| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 12:31:00) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 555 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 12:28:53) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
LarryAIDraw/Maika
|
LarryAIDraw
| 2023-06-23T04:51:16Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-23T04:39:14Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/94904/maika-saku-blend-s-lora
|
LarryAIDraw/alice-09
|
LarryAIDraw
| 2023-06-23T04:51:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-23T04:38:50Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/94924/alice-lendrott-or-shinigami-bocchan-to-kuro-maid-lora
|
LarryAIDraw/NewJerseyVRerun
|
LarryAIDraw
| 2023-06-23T04:50:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-23T04:37:39Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/94550/uss-new-jersey-or-1mb-azur-lane-or
|
LarryAIDraw/sxz-jill-valentine-v3
|
LarryAIDraw
| 2023-06-23T04:49:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-23T04:36:47Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/15027/sxz-jill-valentine-sasha-zotova-julia-voth-resident-evil
|
gaiamolinaro/dqn-SpaceInvadersNoFrameskip-v4
|
gaiamolinaro
| 2023-06-23T04:37:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-23T04:37:14Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 676.50 +/- 216.14
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gaiamolinaro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gaiamolinaro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gaiamolinaro
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jackie68/detr-resnet-50_finetuned_cppe5
|
jackie68
| 2023-06-23T04:18:26Z | 187 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-06-23T03:09:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
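The card ships no usage snippet; a minimal inference sketch (assumed usage, not from the authors) with the `object-detection` pipeline might look like the following. `scene.jpg` is a hypothetical local image, and DETR checkpoints additionally require the `timm` package.
```python
from transformers import pipeline

# Assumed usage, not from the model card; DETR requires the timm package.
detector = pipeline("object-detection", model="jackie68/detr-resnet-50_finetuned_cppe5")

# "scene.jpg" is a hypothetical local image of personal protective equipment.
for prediction in detector("scene.jpg"):
    print(prediction["label"], round(prediction["score"], 3), prediction["box"])
```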
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mszpro/CoreML_ControlNet_SplitEsum
|
mszpro
| 2023-06-23T04:07:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-22T09:08:24Z |
## ControlNet for CoreML
This repository contains some of the ControlNet models built for Core ML, for use on iPhone, iPad, and Mac.
They have been built with Split Einsum, so they can run on the Neural Engine.
It contains the following:
- Tile
- Brightness
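The repository gives no usage instructions; as a rough sketch (assuming `coremltools` is installed, and using a hypothetical file name since the exact package names in this repo are not listed here), the converted models can be inspected from Python:
```python
import coremltools as ct

# "ControlNet-Tile.mlpackage" is a hypothetical file name; substitute the
# actual package downloaded from this repository.
model = ct.models.MLModel("ControlNet-Tile.mlpackage")
print(model.input_description)
print(model.output_description)
```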
|
kenagapito/distilhubert-finetuned-gtzan
|
kenagapito
| 2023-06-23T03:54:47Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-22T11:29:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9412
- Accuracy: 0.81
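The card includes no inference example; a minimal sketch (assumed usage, with a hypothetical audio file) using the `audio-classification` pipeline:
```python
from transformers import pipeline

# Assumed usage, not from the model card.
classifier = pipeline("audio-classification", model="kenagapito/distilhubert-finetuned-gtzan")

# "track.wav" is a hypothetical local audio file; GTZAN covers ten music genres.
print(classifier("track.wav"))
```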
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0684 | 1.0 | 113 | 1.7043 | 0.39 |
| 1.1044 | 2.0 | 226 | 1.0855 | 0.62 |
| 0.84 | 3.0 | 339 | 1.0662 | 0.67 |
| 0.6802 | 4.0 | 452 | 0.7272 | 0.75 |
| 0.4728 | 5.0 | 565 | 0.6389 | 0.86 |
| 0.4119 | 6.0 | 678 | 0.8692 | 0.78 |
| 0.0436 | 7.0 | 791 | 1.0113 | 0.82 |
| 0.0082 | 8.0 | 904 | 0.8984 | 0.83 |
| 0.0442 | 9.0 | 1017 | 1.0056 | 0.81 |
| 0.0024 | 10.0 | 1130 | 0.9412 | 0.81 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pellucid/my_awesome_spotify_clm-model
|
pellucid
| 2023-06-23T03:46:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-23T02:19:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_spotify_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_spotify_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0040
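A minimal generation sketch (assumed usage; the card does not describe the training corpus beyond its name, so the prompt is purely illustrative):
```python
from transformers import pipeline

# Assumed usage, not from the model card.
generator = pipeline("text-generation", model="pellucid/my_awesome_spotify_clm-model")
print(generator("When the night falls", max_new_tokens=40)[0]["generated_text"])
```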
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1248 | 1.0 | 6124 | 1.0846 |
| 1.0669 | 2.0 | 12248 | 1.0487 |
| 1.0464 | 3.0 | 18372 | 1.0040 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Fre2C/UnreaLibrary-Mix
|
Fre2C
| 2023-06-23T03:30:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T06:01:37Z |
---
license: creativeml-openrail-m
---
**civitai**: https://civitai.com/models/91609/unrealibrary-mix (more previews on civitai)
None of the preview images use embeddings or LoRA.
My merged models:
**DreaMirror**: https://civitai.com/models/30294 / https://huggingface.co/Fre2C/DreaMirror-Mix
**UnreaLibrary**: https://civitai.com/models/91609 / https://huggingface.co/Fre2C/UnreaLibrary-Mix
**The direction of this model is to stay as faithful as possible to the prompt (which seems a bit difficult for a 2D model) while preserving the creativity of 2D models (so I did not merge in any 3D/2.5D models. ~~Which means most of my time was spent fighting with hands~~), with appropriate light and dark contrast.**
**You can try anything with it!**
**I learned a lot from the following places; thank you very much.**
https://huggingface.co/WarriorMama777/OrangeMixs
https://civitai.com/models/9409/or-anything-v5
https://economylife.net/u-net-marge-webui1111/
https://docs.qq.com/doc/DTkRodlJ1c1VzcFBr?u=e7c714671e694797a04f1d58aff5c8b0 (translated version: https://rentry.org/Merge_Block_Weight_-china-_v1_Beta#1-introduction)
https://docs.qq.com/doc/DQ1Vzd3VCTllFaXBv?_t=1685979317852&u=e7c714671e694797a04f1d58aff5c8b0
https://www.figma.com/file/1JYEljsTwm6qRwR665yI7w/Merging-lab%E3%80%8CHosioka-Fork%E3%80%8D?type=design&node-id=1-69
**Suggestions for use:**
If a face comes out broken, or you want to **improve facial quality**, use **Inpaint** with **Inpaint area** set to **Only masked** (best results) for a **better face**; **Hires. fix**, **another seed**, or **other tools** are also good options.
A **slightly higher** resolution (a little above 512 x 512) plus **Hires. fix** gives **better** image quality (if GPU memory is tight, try a **lower upscale ratio** for Hires. fix or **another upscaler**).
**Positive quality prompts (like "best quality") are unnecessary: they reduce the range of possible images and push the output toward a single style.**
**It is better to spend the weight you would have put on positive quality prompts on negative quality prompts instead.**
If the image feels **not rich enough**, try **describing it in more detail** to bring the result **closer to what you imagine**.
*If you want to roll the dice with very few prompts, it is best to add umbrella to the negative prompt (at least in V1).*
**A prompt's weight and position affect how prominent it is in the image.**
**If a prompt gets no response, check the following in order: synonyms (different wordings of the same concept), prompt conflicts (positive vs. negative), model problems (see whether other models respond to the same prompt),** then embeddings (I am not in the habit of using them, but given how they work I list them here for reference).
**I usually switch to Clip skip 2 when results don't meet expectations.**
**Use LoRA as you like!**
I use these two VAEs:
https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt
https://civitai.com/models/22354/clearvae
**V1**






**Models used**
kawaiimixNijiV5Cute_v10【58f37f4736】
Counterfeit-V3.0_fp32【17277FBE68】
pikasNewGeneration_v20【6C509880A5】
breakdomainanime_A0440【1870FA10C3】
plagion_v10【0C42B21C09】
AnythingV5V3_v5PrtRE【7f96a1a9ca】
tComicV35_v35【25750140EA】
**Recipe**
**use: https://github.com/hako-mikan/sd-webui-supermerger/**
(kawaiimixNijiV5Cute_v10 x (1-alpha) + Counterfeit-V3.0_fp32 x alpha) x (1-beta) + pikasNewGeneration_v20 x beta
alpha:0.7,1.0,0.9,0.8,0.7,0.6,0.6,0.7,0.8,0.9,0.7,0.5,0.7,0.7,0.85,0.75,0.65,0.75,0.85,0.75,0.65,0.75,0.85,0.9,0.8,0.8
beta:0.75,0.35,0.45,0.55,0.65,0.75,0.85,0.75,0.85,0.75,0.6,0.6,0.6,0.5,0.35,0.45,0.55,0.6,0.65,0.55,0.6,0.5,0.35,0.4,0.5,0.4
**Named as step1**
(breakdomainanime_A0440 x (1-alpha) + plagion_v10 x alpha) x (1-beta) + step1 x beta
alpha:0.25,0.35,0.45,0.55,0.65,0.55,0.45,0.55,0.4,0.6,0.7,0.75,0.8,0.4,0.4,0.5,0.6,0.7,0.8,0.6,0.5,0.4,0.5,0.4,0.7,0.7
beta:0.7,0.85,0.75,0.65,0.55,0.7,0.6,0.5,0.4,0.5,0.6,0.5,0.4,0.6,0.8,0.7,0.6,0.8,0.7,0.6,0.5,0.4,0.5,0.6,0.5,0.4
**Named as step2**
(AnythingV5V3_v5PrtRE x (1-alpha) + tComicV35_v35 x alpha) x (1-beta) + step2 x beta
alpha:0.65,0.75,0.65,0.75,0.65,0.75,0.65,0.75,0.85,1.0,0.85,0.75,0.85,0.4,0.65,0.75,0.65,0.45,0.3,0.15,0.3,0.45,0.65,0.75,0.8,0.8
beta:0.75,0.25,0.35,0.45,0.55,0.75,0.85,0.75,0.85,0.75,0.85,1.0,1.0,0.7,0.35,0.45,0.55,0.75,0.65,0.75,0.65,0.55,0.45,0.35,0.75,0.85
**prune and get final fp16 version**
|
hanguohuai/compassionmix_v10
|
hanguohuai
| 2023-06-23T02:43:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-01T03:55:21Z |
---
license: creativeml-openrail-m
---
|
jadechip/x-boonsie
|
jadechip
| 2023-06-23T02:33:20Z | 30 | 0 |
diffusers
|
[
"diffusers",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-23T02:30:15Z |
---
license: mit
---
## Dreambooth for people in a hurry - Colab edition
The Stable-Diffusion-v1-5 checkpoint is used as the base model and trained on a custom concept.
### Concept info
Your full token: <b>x_boonsie woman</b>
Example prompt: Professional headshot photo of x_boonsie woman as a magician, Tiffen Digital Diffusion / FX, 100mm
Instance prompt: photo of x_boonsie woman
Class prompt: photo of a woman
### Model info
Training images: 6
Regularization images: 50
Model type: Diffusers, Checkpoint
training_steps: 1000
lr_scheduler: constant
lr_warmup_steps: 0
learning rate: 1e-6
mixed_precision: fp16
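A minimal inference sketch (assumed usage, not from the card) with the `diffusers` pipeline and the concept token described above; fp16 and CUDA are illustrative choices:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed usage, not from the model card.
pipe = StableDiffusionPipeline.from_pretrained("jadechip/x-boonsie", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Professional headshot photo of x_boonsie woman as a magician"
image = pipe(prompt).images[0]
image.save("x_boonsie.png")
```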
|
arminmrm93/Reinforce-Pixelcopter-PLE-v0
|
arminmrm93
| 2023-06-23T02:14:49Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-23T02:14:44Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.30 +/- 29.27
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
gongliyu/my_awesome_billsum_model
|
gongliyu
| 2023-06-23T02:03:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-23T01:30:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1396
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4361
- Rouge1: 0.1396
- Rouge2: 0.0447
- Rougel: 0.1173
- Rougelsum: 0.1174
- Gen Len: 19.0
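A minimal inference sketch (assumed usage; the `summarize:` prefix is T5's usual convention and is not stated in the card):
```python
from transformers import pipeline

# Assumed usage, not from the model card.
summarizer = pipeline("summarization", model="gongliyu/my_awesome_billsum_model")

bill_text = "The people of the State of California do enact as follows: ..."
print(summarizer("summarize: " + bill_text, max_length=20))
```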
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 8 | 4.1677 | 0.1431 | 0.0488 | 0.121 | 0.1211 | 19.0 |
| No log | 2.0 | 16 | 3.7195 | 0.1417 | 0.0471 | 0.1197 | 0.1198 | 19.0 |
| No log | 3.0 | 24 | 3.5005 | 0.1408 | 0.0456 | 0.1185 | 0.1185 | 19.0 |
| No log | 4.0 | 32 | 3.4361 | 0.1396 | 0.0447 | 0.1173 | 0.1174 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pln-fing-udelar/robertuito-HUHU-task2b
|
pln-fing-udelar
| 2023-06-23T01:57:02Z | 64 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"region:us"
] |
text-classification
| 2023-06-23T00:54:48Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: robertuito-HUHU-task2b
results: []
inference: false
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robertuito-HUHU-task2b
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the HUHU Shared Task at IberLEF 2023. It was trained on a partition of the train set provided by the organizers.
## Model description
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the regression task of predicting, on a continuous scale from 1 to 5, how prejudicial a tweet (considered to be hurtful or conveying prejudice in some way) is on average towards minority groups.
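A minimal inference sketch (assumed usage; the repository ships TensorFlow weights, and reading the single logit as the predicted prejudice degree is an assumption about the regression head):
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "pln-fing-udelar/robertuito-HUHU-task2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("un tuit de ejemplo", return_tensors="tf")
outputs = model(**inputs)
# Assumption: a one-unit regression head whose logit is the 1-5 prejudice score.
print(float(outputs.logits[0][0]))
```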
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2750 | 1 |
| 0.4855 | 2 |
| 0.3276 | 3 |
| 0.1917 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
DB112/training1
|
DB112
| 2023-06-23T01:55:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-23T01:55:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
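For reference, a sketch of the equivalent `BitsAndBytesConfig` (an assumed reconstruction from the settings above; the card does not name the base model, so the identifier below is a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstructed from the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "base-model-id" is a placeholder; the card does not state the base model.
base = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
model = PeftModel.from_pretrained(base, "DB112/training1")
```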
### Framework versions
- PEFT 0.4.0.dev0
|
Ggxcc4566/Eff
|
Ggxcc4566
| 2023-06-23T01:40:00Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-06-23T01:40:00Z |
---
license: bigscience-bloom-rail-1.0
---
|
AlgorithmicResearchGroup/arxiv-distilroberta-base-GenQ
|
AlgorithmicResearchGroup
| 2023-06-23T01:24:13Z | 11 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T02:50:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
language:
- en
library_name: sentence-transformers
---
# Arxiv-distilroberta-base-GenQ
Arxiv-distilroberta-base-GenQ is trained on [ArtifactAI/arxiv-beir-500k-generated-queries](https://huggingface.co/datasets/ArtifactAI/arxiv-beir-500k-generated-queries), a large corpus of 500k question/abstract pairs extracted from the ArXiv dataset. It is designed to encode and transform sentences from academic papers, allowing for effective semantic similarity and information retrieval tasks.
It maps sentences & paragraphs to a 768 dimensional dense vector space.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ArtifactAI/arxiv-distilroberta-base-GenQ')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ArtifactAI/arxiv-distilroberta-base-GenQ')
model = AutoModel.from_pretrained('ArtifactAI/arxiv-distilroberta-base-GenQ')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ArtifactAI/arxiv-distilroberta-base-GenQ)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 23128 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2312,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```
@misc{arxiv-distilroberta-base-GenQ,
title={arxiv-distilroberta-base-GenQ},
author={Matthew Kenney},
year={2023}
}
```
|
ka1yo/kaiyomixes
|
ka1yo
| 2023-06-23T01:14:33Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-04-02T14:48:50Z |
---
license: openrail
---
# Kaiyo Mixes
I'm new to using Hugging Face, so this will act as a repository for some of my merged models.
Attached is the Notion page where I document my recipes for each model and some example images.
https://kaiyo.notion.site/Personal-Models-f5c0aff01eab48869699b958a66e4501
Please note that these images should not be used for commercial purposes
and the models should not be redistributed and sold for monetary gain.
Thanks for showing an interest in these merges!
- Kaiyo
|
evatan/cat_wo_prior
|
evatan
| 2023-06-23T00:49:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T14:36:29Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - evatan/cat_wo_prior
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
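A minimal inference sketch (assumed usage, not from the card) with the instance prompt above; fp16 and CUDA are illustrative choices:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed usage, not from the model card.
pipe = StableDiffusionPipeline.from_pretrained("evatan/cat_wo_prior", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of sks cat").images[0]
image.save("sks_cat.png")
```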
|
YoneShiro/ppo-PyramidsRND
|
YoneShiro
| 2023-06-23T00:48:06Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-23T00:46:46Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: YoneShiro/ppo-PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
deandrasetya/indobert-abusive-language-classifier
|
deandrasetya
| 2023-06-23T00:17:37Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-14T10:14:29Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: indobert-abusive-language-classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# indobert-abusive-language-classifier
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1613
- Train Sparse Categorical Accuracy: 0.9417
- Validation Loss: 0.2973
- Validation Sparse Categorical Accuracy: 0.8857
- Epoch: 2
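A minimal inference sketch (assumed usage; the repository ships TensorFlow weights, hence `framework="tf"`):
```python
from transformers import pipeline

# Assumed usage, not from the model card; the repo carries TF weights only.
classifier = pipeline(
    "text-classification",
    model="deandrasetya/indobert-abusive-language-classifier",
    framework="tf",
)
print(classifier("contoh kalimat dalam bahasa Indonesia"))
```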
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.4496 | 0.7811 | 0.3146 | 0.8671 | 0 |
| 0.2437 | 0.9026 | 0.2959 | 0.8888 | 1 |
| 0.1613 | 0.9417 | 0.2973 | 0.8857 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Dans-Archive/Dans-PersonalityEngine-30b
|
Dans-Archive
| 2023-06-23T00:14:59Z | 52 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T04:25:05Z |
---
language:
- en
---
### Description:
This is a multipurpose chat / chat instruct hybrid model in the same vein as the Pygmalion team's Metharme. It uses a curated pile of training data that has been normalized into a consistent training format. It has been trained on a wide array of one shot instructions, multi round instructions, and role playing scenarios.
The training parameters were suboptimal for the most recent run and I decided to stop after 2 epochs as 3 likely would have overtrained it. I plan on iterating the model and improving it further when I have access to more funds to do so.
### Prompt format:
Metharme
The prompt should start with the cursor on the same line directly after "<|model|>" with no space. The following are all valid formats and can be extended to as many rounds as desired.
```
<|system|>system message here<|user|>user message here<|model|>
```
```
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
```
```
<|system|>system message here<|model|>
```
```
<|system|>system message here<|model|>model message<|user|>user message here<|model|>
```
Some example prompts:
```
<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
```
```
<|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|>
```
```
<|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|>
```
More will be added at a later date.
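A minimal generation sketch (assumed usage, not the authors' code; loading a 30B model this way needs substantial GPU memory and the `accelerate` package for `device_map="auto"`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dans-Archive/Dans-PersonalityEngine-30b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generation begins directly after <|model|>, with no trailing space.
prompt = ("<|system|>The following is a transcript between a helpful assistant "
          "and a user.<|user|>Why is the sky blue?<|model|>")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```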
### Perplexity Benchmarks:
- TBA
### Training information:
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- GPTQ 4 bit LoRA
- 2 Epochs
- 64 / 32 R / A
- 2048 Cutoff
- 42 hours on 1x RTX 4090
### Data used in training:
- TBA
### Models used:
For training:
https://huggingface.co/PocketDoc/llama-30b-gptq-4bit-128g
For merging:
https://huggingface.co/PocketDoc/Dans-PersonalityEngine-30b-LoRA
and
https://huggingface.co/huggyllama/llama-30b
### Disclaimer:
It has not been aligned and no warranty is given for the quality or safety of its outputs.
|
KoboldAI/OPT-350M-Erebus
|
KoboldAI
| 2023-06-23T00:03:22Z | 1,520 | 15 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2022-11-13T11:56:06Z |
---
language: en
license: other
commercial: no
inference: false
---
# OPT 350M - Erebus
## Model description
This is the second generation of the original Shinen made by Mr. Seeker. The full dataset consists of 6 different sources, all surrounding the "Adult" theme. The name "Erebus" comes from Greek mythology and denotes "darkness". This is in line with Shin'en, or "deep abyss". For inquiries, please contact the KoboldAI community. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The data can be divided into 6 different datasets:
- Literotica (everything with 4.5/5 or higher)
- Sexstories (everything with 90 or higher)
- Dataset-G (private dataset of X-rated stories)
- Doc's Lab (all stories)
- Pike Dataset (novels with "adult" rating)
- SoFurry (collection of various animals)
The dataset uses `[Genre: <comma-separated list of genres>]` for tagging.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/OPT-350M-Erebus')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}]
```
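The genre tagging described under "Training data" can also be used in prompts; a hypothetical example (the exact effect is not documented in the card):
```py
>>> generator("[Genre: romance, drama] The captain stepped onto the bridge.", do_sample=True, min_length=50)
```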
## Limitations and biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). **Warning: This model has a very strong NSFW bias!**
### License
OPT-350M is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### BibTeX entry and citation info
```
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Blackroot/airoboros-1.3-unstrct50sparse
|
Blackroot
| 2023-06-23T00:02:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T23:20:49Z |
---
license: cc-by-nc-4.0
---
33B airoboros 1.3, pruned to 50% unstructured sparsity with wanda: <https://github.com/locuslab/wanda>
Perplexity of about 5.4.
|
retrieval-bar/google_flan-t5-small_mbe_no_passage
|
retrieval-bar
| 2023-06-22T23:46:49Z | 8 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T22:20:02Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
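The card only lists the framework version; a minimal loading sketch (assumption: the base model is google/flan-t5-small, inferred from the repository name and not stated in the card):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Assumption: base model inferred from the repo name, not stated in the card.
base_id = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "retrieval-bar/google_flan-t5-small_mbe_no_passage")
```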
|
YoneShiro/ppo-SnowballTarget
|
YoneShiro
| 2023-06-22T23:46:08Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-22T23:46:01Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: YoneShiro/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pln-fing-udelar/robertuito-HUHU-task2a-group4
|
pln-fing-udelar
| 2023-06-22T23:34:15Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T23:18:23Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: robertuito-HUHU-task2a-group4
results: []
widget:
- text: "El español es un idioma muy hablado en el mundo."
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robertuito-HUHU-task2a-group4
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the HUHU Shared Task at IberLEF 2023. It was trained on a partition of the train set provided by the organizers.
## Model description
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the task of classifying a tweet (considered to be hurtful or conveying prejudice in some way) as PREJUDICE-OVERWEIGHT if it shows prejudice towards overweight people.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.1952 | 1 |
| 0.0340 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
pln-fing-udelar/robertuito-HUHU-task2a-group3
|
pln-fing-udelar
| 2023-06-22T23:34:04Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T23:12:05Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: robertuito-HUHU-task2a-group3
results: []
widget:
- text: "El español es un idioma muy hablado en el mundo."
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robertuito-HUHU-task2a-group3
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the HUHU Shared Task at IberLEF 2023. It was trained on a partition of the train set provided by the organizers.
## Model description
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the task of classifying a tweet (considered to be hurtful or conveying prejudice in some way) as PREJUDICE-INMIGRANT-RACE if it shows prejudice towards immigrants or people’s race.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.2134 | 1 |
| 0.0248 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
rd124/marian-finetuned-samanantar100K-en-to-hi
|
rd124
| 2023-06-22T23:33:53Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-06-22T22:39:15Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: marian-finetuned-samanantar100K-en-to-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-samanantar100K-en-to-hi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9174
- Bleu: 18.0140
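A minimal inference sketch (assumed usage, not from the card):
```python
from transformers import pipeline

# Assumed usage, not from the model card; the model translates English to Hindi.
translator = pipeline("translation", model="rd124/marian-finetuned-samanantar100K-en-to-hi")
print(translator("How are you today?"))
```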
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pln-fing-udelar/robertuito-HUHU-task2a-group2
|
pln-fing-udelar
| 2023-06-22T23:33:38Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T23:05:47Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: robertuito-HUHU-task2a-group2
results: []
widget:
- text: "El español es un idioma muy hablado en el mundo."
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robertuito-HUHU-task2a-group2
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the HUHU Shared Task at IberLEF 2023. It was trained on a partition of the train set provided by the organizers.
## Model description
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the task of classifying a tweet (considered to be hurtful or conveying prejudice in some way) as PREJUDICE-LGBTIQ if it shows prejudice towards the LGBTIQ community.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.3088 | 1 |
| 0.0914 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
nahidhasan/my_awesome_qa_model
|
nahidhasan
| 2023-06-22T23:32:25Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-22T02:23:34Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nahidhasan/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nahidhasan/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7658
- Validation Loss: 1.9713
- Epoch: 2
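A minimal inference sketch (assumed usage; the repository ships TensorFlow weights, hence `framework="tf"`):
```python
from transformers import pipeline

# Assumed usage, not from the model card.
qa = pipeline(
    "question-answering",
    model="nahidhasan/my_awesome_qa_model",
    framework="tf",
)
print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is in Paris."))
```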
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4587 | 2.2304 | 0 |
| 1.9841 | 1.9713 | 1 |
| 1.7658 | 1.9713 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pln-fing-udelar/robertuito-HUHU-task2a-group1
|
pln-fing-udelar
| 2023-06-22T23:27:45Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T22:06:13Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: robertuito-HUHU-task2a-group1
results: []
widget:
- text: "El español es un idioma muy hablado en el mundo."
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robertuito-HUHU-task2a-group1
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the HUHU Shared Task at IberLEF 2023. It was trained on a partition of the train set provided by the organizers.
## Model description
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the task of classifying a tweet (considered to be hurtful or conveying prejudice in some way) as PREJUDICE-WOMAN if it shows prejudice towards women.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.3371 | 1 |
| 0.1060 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
FlareX/tayko-36772
|
FlareX
| 2023-06-22T23:26:16Z | 3 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T23:16:52Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### tayko-36772 Dreambooth model trained by FlareX with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
RajkNakka/codeparrot-ds
|
RajkNakka
| 2023-06-22T23:13:30Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T14:21:07Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7735
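A minimal generation sketch (assumed usage; the prompt is illustrative of a code-completion model):
```python
from transformers import pipeline

# Assumed usage, not from the model card.
generator = pipeline("text-generation", model="RajkNakka/codeparrot-ds")
print(generator("def fibonacci(n):", max_new_tokens=40)[0]["generated_text"])
```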
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4948 | 0.93 | 5000 | 1.7735 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sheshenin/vikash3-2
|
sheshenin
| 2023-06-22T22:54:55Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T22:41:58Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### VikaSH3_2 Dreambooth model trained by sheshenin with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:






















|
GEMCorp/Reinforce-Pixelcopter-PLE-v0
|
GEMCorp
| 2023-06-22T22:44:48Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T22:43:54Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 29.90 +/- 24.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** (i.e. Monte Carlo Policy Gradient) agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CMacD12/distilbert-base-uncased-finetuned-cola
|
CMacD12
| 2023-06-22T22:33:13Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-12T23:46:03Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: CMacD12/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# CMacD12/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1931
- Validation Loss: 0.5577
- Train Matthews Correlation: 0.5139
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the Keras sketch after this list):
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
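A hedged sketch of rebuilding this optimizer in Keras, copying the values from the serialized config above:
```python
import tensorflow as tf

# Sketch only: mirrors the optimizer config listed above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1602,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```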
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5145 | 0.4719 | 0.4496 | 0 |
| 0.3232 | 0.4749 | 0.5091 | 1 |
| 0.1931 | 0.5577 | 0.5139 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.13.0-rc1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pln-fing-udelar/robertuito-HUHU-task1
|
pln-fing-udelar
| 2023-06-22T22:25:41Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-20T20:13:45Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: robertuito-HUHU-task1
results: []
widget:
- text: "El español es un idioma muy hablado en el mundo."
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robertuito-HUHU-task1
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the HUHU Shared Task at IberLEF 2023. It was trained on a partition of the training set provided by the organizers.
## Model description
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the task of classifying a tweet (considered to be hurtful or to convey prejudice in some way) as humorous or non-humorous.
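A minimal usage sketch with the 🤗 `pipeline` API; the repo ships TensorFlow weights, so the TF framework is requested explicitly, and the example text is taken from the widget above:
```python
from transformers import pipeline

# Sketch only: label names are model-specific and not documented in this card.
classifier = pipeline(
    "text-classification",
    model="pln-fing-udelar/robertuito-HUHU-task1",
    framework="tf",
)
print(classifier("El español es un idioma muy hablado en el mundo."))
```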
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
intanm/fewshot-qa-003-20230623-001
|
intanm
| 2023-06-22T22:25:11Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-22T21:59:37Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: fewshot-qa-003-20230623-001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fewshot-qa-003-20230623-001
This model is a fine-tuned version of [timpal0l/mdeberta-v3-base-squad2](https://huggingface.co/timpal0l/mdeberta-v3-base-squad2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7303
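A hedged extractive-QA usage sketch; the question and context strings are hypothetical placeholders, not taken from this card:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="intanm/fewshot-qa-003-20230623-001")
result = qa(
    question="Who wrote the report?",                       # placeholder
    context="The report was written by the finance team.",  # placeholder
)
print(result["answer"], result["score"])
```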
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 208 | 2.2830 |
| No log | 2.0 | 416 | 2.2975 |
| 2.2077 | 3.0 | 624 | 2.4189 |
| 2.2077 | 4.0 | 832 | 2.7090 |
| 1.1515 | 5.0 | 1040 | 3.0032 |
| 1.1515 | 6.0 | 1248 | 3.3080 |
| 1.1515 | 7.0 | 1456 | 3.5268 |
| 0.6061 | 8.0 | 1664 | 3.5598 |
| 0.6061 | 9.0 | 1872 | 3.6973 |
| 0.3833 | 10.0 | 2080 | 3.7303 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AlekseyKorshuk/pygmalion-6b-vicuna-chatml
|
AlekseyKorshuk
| 2023-06-22T22:15:31Z | 1,491 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"generated_from_trainer",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T05:04:26Z |
---
license: creativeml-openrail-m
tags:
- generated_from_trainer
model-index:
- name: pygmalion-6b-vicuna-chatml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pygmalion-6b-vicuna-chatml
This model is a fine-tuned version of [PygmalionAI/pygmalion-6b](https://huggingface.co/PygmalionAI/pygmalion-6b) on an unspecified dataset.
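A hedged generation sketch; note this is a ~6B-parameter GPT-J model, so a GPU is advisable, and the intended chat prompt template is not documented in this card:
```python
import torch
from transformers import pipeline

# Sketch only: plain-text prompting shown; the expected chat format is an assumption.
generator = pipeline(
    "text-generation",
    model="AlekseyKorshuk/pygmalion-6b-vicuna-chatml",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(generator("Hello! How are you today?", max_new_tokens=50)[0]["generated_text"])
```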
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
frncscp/ms-azure-bike-rentals
|
frncscp
| 2023-06-22T22:14:12Z | 0 | 0 |
transformers
|
[
"transformers",
"tabular-regression",
"endpoints_compatible",
"region:us"
] |
tabular-regression
| 2023-06-05T20:33:35Z |
---
pipeline_tag: tabular-regression
library_name: transformers
---
|
Brandulio/Pyramids
|
Brandulio
| 2023-06-22T22:09:30Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-22T22:08:40Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Brandulio/Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AI4PD/lact
|
AI4PD
| 2023-06-22T22:00:10Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T20:27:38Z |
---
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [/home/woody/b114cb/b114cb10/zymCTRL/gpt2-large/config.json](https://huggingface.co//home/woody/b114cb/b114cb10/zymCTRL/gpt2-large/config.json) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2882 | 0.02 | 10 | 2.9581 |
| 2.5059 | 0.04 | 20 | 2.3844 |
| 2.3368 | 0.06 | 30 | 2.3644 |
| 2.3476 | 0.08 | 40 | 2.3494 |
| 2.3185 | 0.1 | 50 | 2.3697 |
| 2.3468 | 0.12 | 60 | 2.3255 |
| 2.262 | 0.14 | 70 | 2.2512 |
| 2.1646 | 0.16 | 80 | 2.1945 |
| 2.1558 | 0.18 | 90 | 2.1885 |
| 2.1934 | 0.2 | 100 | 2.1483 |
| 2.0855 | 0.22 | 110 | 2.1152 |
| 2.0844 | 0.24 | 120 | 2.0839 |
| 2.0647 | 0.26 | 130 | 2.0615 |
| 1.9665 | 0.28 | 140 | 2.0330 |
| 1.9761 | 0.3 | 150 | 2.0068 |
| 1.9428 | 0.32 | 160 | 1.9914 |
| 1.9351 | 0.34 | 170 | 1.9369 |
| 1.9366 | 0.36 | 180 | 1.9139 |
| 1.9548 | 0.38 | 190 | 1.8789 |
| 1.9625 | 0.4 | 200 | 1.8486 |
| 1.8584 | 0.42 | 210 | 1.8198 |
| 1.8857 | 0.44 | 220 | 1.8118 |
| 1.7574 | 0.46 | 230 | 1.7603 |
| 1.8114 | 0.48 | 240 | 1.7370 |
| 1.7303 | 0.5 | 250 | 1.7205 |
| 1.7535 | 0.52 | 260 | 1.7124 |
| 1.7775 | 0.54 | 270 | 1.7013 |
| 1.685 | 0.56 | 280 | 1.6612 |
| 1.5898 | 0.58 | 290 | 1.6578 |
| 1.7875 | 0.6 | 300 | 1.6458 |
| 1.628 | 0.62 | 310 | 1.6253 |
| 1.6186 | 0.64 | 320 | 1.6195 |
| 1.6899 | 0.66 | 330 | 1.6102 |
| 1.5908 | 0.68 | 340 | 1.5907 |
| 1.6514 | 0.7 | 350 | 1.6104 |
| 1.6027 | 0.72 | 360 | 1.5766 |
| 1.6319 | 0.74 | 370 | 1.5623 |
| 1.6103 | 0.76 | 380 | 1.5764 |
| 1.4518 | 0.78 | 390 | 1.5449 |
| 1.498 | 0.8 | 400 | 1.5345 |
| 1.5266 | 0.82 | 410 | 1.5413 |
| 1.5622 | 0.84 | 420 | 1.5229 |
| 1.4863 | 0.86 | 430 | 1.5208 |
| 1.5492 | 0.88 | 440 | 1.4996 |
| 1.5515 | 0.9 | 450 | 1.4857 |
| 1.4799 | 0.92 | 460 | 1.4935 |
| 1.4514 | 0.94 | 470 | 1.4745 |
| 1.5462 | 0.96 | 480 | 1.4784 |
| 1.6032 | 0.98 | 490 | 1.4911 |
| 1.7418 | 1.0 | 500 | 1.4733 |
| 1.4983 | 1.02 | 510 | 1.4646 |
| 1.5383 | 1.04 | 520 | 1.4442 |
| 1.3454 | 1.06 | 530 | 1.4332 |
| 1.3128 | 1.08 | 540 | 1.4261 |
| 1.5472 | 1.1 | 550 | 1.4232 |
| 1.252 | 1.12 | 560 | 1.3924 |
| 1.3538 | 1.14 | 570 | 1.3975 |
| 1.5448 | 1.16 | 580 | 1.3915 |
| 1.4016 | 1.18 | 590 | 1.4025 |
| 1.3041 | 1.2 | 600 | 1.3837 |
| 1.3857 | 1.22 | 610 | 1.3890 |
| 1.2923 | 1.24 | 620 | 1.3452 |
| 1.28 | 1.26 | 630 | 1.3492 |
| 1.4052 | 1.28 | 640 | 1.3254 |
| 1.3992 | 1.3 | 650 | 1.3670 |
| 1.5044 | 1.32 | 660 | 1.3153 |
| 1.2274 | 1.34 | 670 | 1.3142 |
| 1.2392 | 1.36 | 680 | 1.3150 |
| 1.365 | 1.38 | 690 | 1.2966 |
| 1.3024 | 1.4 | 700 | 1.2688 |
| 1.347 | 1.42 | 710 | 1.2874 |
| 1.3898 | 1.44 | 720 | 1.2543 |
| 1.4256 | 1.46 | 730 | 1.2397 |
| 1.2566 | 1.48 | 740 | 1.2430 |
| 1.2473 | 1.5 | 750 | 1.2135 |
| 1.1466 | 1.52 | 760 | 1.2171 |
| 1.3065 | 1.54 | 770 | 1.1897 |
| 1.3033 | 1.56 | 780 | 1.1646 |
| 1.1166 | 1.58 | 790 | 1.1723 |
| 1.0874 | 1.6 | 800 | 1.1511 |
| 1.017 | 1.62 | 810 | 1.1396 |
| 1.0437 | 1.64 | 820 | 1.1016 |
| 1.2206 | 1.66 | 830 | 1.0841 |
| 0.9738 | 1.68 | 840 | 1.0760 |
| 1.1351 | 1.7 | 850 | 1.0562 |
| 1.0697 | 1.72 | 860 | 1.0556 |
| 1.0296 | 1.74 | 870 | 1.0342 |
| 1.0904 | 1.76 | 880 | 1.0047 |
| 1.01 | 1.78 | 890 | 1.0184 |
| 0.951 | 1.8 | 900 | 0.9845 |
| 1.0111 | 1.82 | 910 | 0.9675 |
| 1.0824 | 1.84 | 920 | 0.9759 |
| 0.9745 | 1.86 | 930 | 0.9336 |
| 0.8632 | 1.88 | 940 | 0.9347 |
| 0.9959 | 1.9 | 950 | 0.9395 |
| 0.8906 | 1.92 | 960 | 0.8965 |
| 1.0552 | 1.94 | 970 | 0.8892 |
| 0.8387 | 1.96 | 980 | 0.8822 |
| 1.0068 | 1.98 | 990 | 0.8805 |
| 1.083 | 2.0 | 1000 | 0.8490 |
| 0.8407 | 2.02 | 1010 | 0.8457 |
| 0.7468 | 2.04 | 1020 | 0.8285 |
| 0.8421 | 2.06 | 1030 | 0.8055 |
| 0.8407 | 2.08 | 1040 | 0.8160 |
| 0.8126 | 2.1 | 1050 | 0.8266 |
| 0.7318 | 2.12 | 1060 | 0.8151 |
| 0.9142 | 2.14 | 1070 | 0.7876 |
| 0.6483 | 2.16 | 1080 | 0.7866 |
| 0.8092 | 2.18 | 1090 | 0.7818 |
| 0.8235 | 2.2 | 1100 | 0.7708 |
| 0.7062 | 2.22 | 1110 | 0.7693 |
| 0.7348 | 2.24 | 1120 | 0.7875 |
| 0.7507 | 2.26 | 1130 | 0.7567 |
| 0.7588 | 2.28 | 1140 | 0.7565 |
| 0.605 | 2.3 | 1150 | 0.7298 |
| 0.8721 | 2.32 | 1160 | 0.7254 |
| 0.6988 | 2.34 | 1170 | 0.7072 |
| 0.6294 | 2.36 | 1180 | 0.7082 |
| 0.7117 | 2.38 | 1190 | 0.7113 |
| 0.8558 | 2.4 | 1200 | 0.6991 |
| 0.6187 | 2.42 | 1210 | 0.6905 |
| 0.6791 | 2.44 | 1220 | 0.6875 |
| 0.5447 | 2.46 | 1230 | 0.6869 |
| 0.7299 | 2.48 | 1240 | 0.6777 |
| 0.5829 | 2.5 | 1250 | 0.6658 |
| 0.6435 | 2.52 | 1260 | 0.6603 |
| 0.7303 | 2.54 | 1270 | 0.6578 |
| 0.7244 | 2.56 | 1280 | 0.6594 |
| 0.6463 | 2.58 | 1290 | 0.6409 |
| 0.7766 | 2.6 | 1300 | 0.6417 |
| 0.6012 | 2.62 | 1310 | 0.6461 |
| 0.5974 | 2.64 | 1320 | 0.6365 |
| 0.556 | 2.66 | 1330 | 0.6301 |
| 0.6369 | 2.68 | 1340 | 0.6247 |
| 0.5699 | 2.7 | 1350 | 0.6163 |
| 0.624 | 2.72 | 1360 | 0.6138 |
| 0.6774 | 2.74 | 1370 | 0.6135 |
| 0.5553 | 2.76 | 1380 | 0.6076 |
| 0.604 | 2.78 | 1390 | 0.5938 |
| 0.6087 | 2.8 | 1400 | 0.5956 |
| 0.5935 | 2.82 | 1410 | 0.5933 |
| 0.6042 | 2.84 | 1420 | 0.5911 |
| 0.6425 | 2.86 | 1430 | 0.5844 |
| 0.6316 | 2.88 | 1440 | 0.5745 |
| 0.597 | 2.9 | 1450 | 0.5695 |
| 0.5754 | 2.92 | 1460 | 0.5704 |
| 0.5197 | 2.94 | 1470 | 0.5697 |
| 0.6256 | 2.96 | 1480 | 0.5596 |
| 0.5818 | 2.98 | 1490 | 0.5599 |
| 0.5464 | 3.01 | 1500 | 0.5565 |
| 0.4616 | 3.03 | 1510 | 0.5629 |
| 0.6482 | 3.05 | 1520 | 0.5529 |
| 0.5356 | 3.07 | 1530 | 0.5526 |
| 0.5688 | 3.09 | 1540 | 0.5528 |
| 0.6018 | 3.11 | 1550 | 0.5408 |
| 0.5794 | 3.13 | 1560 | 0.5371 |
| 0.5443 | 3.15 | 1570 | 0.5375 |
| 0.4435 | 3.17 | 1580 | 0.5345 |
| 0.5087 | 3.19 | 1590 | 0.5293 |
| 0.518 | 3.21 | 1600 | 0.5336 |
| 0.5914 | 3.23 | 1610 | 0.5316 |
| 0.5667 | 3.25 | 1620 | 0.5254 |
| 0.5218 | 3.27 | 1630 | 0.5207 |
| 0.4267 | 3.29 | 1640 | 0.5270 |
| 0.5839 | 3.31 | 1650 | 0.5199 |
| 0.5095 | 3.33 | 1660 | 0.5268 |
| 0.4616 | 3.35 | 1670 | 0.5192 |
| 0.5027 | 3.37 | 1680 | 0.5106 |
| 0.441 | 3.39 | 1690 | 0.5150 |
| 0.4416 | 3.41 | 1700 | 0.5156 |
| 0.4411 | 3.43 | 1710 | 0.5103 |
| 0.47 | 3.45 | 1720 | 0.5038 |
| 0.5079 | 3.47 | 1730 | 0.5048 |
| 0.3913 | 3.49 | 1740 | 0.5082 |
| 0.4977 | 3.51 | 1750 | 0.4976 |
| 0.5905 | 3.53 | 1760 | 0.4975 |
| 0.4362 | 3.55 | 1770 | 0.4962 |
| 0.4309 | 3.57 | 1780 | 0.5008 |
| 0.4477 | 3.59 | 1790 | 0.4988 |
| 0.4826 | 3.61 | 1800 | 0.4886 |
| 0.6181 | 3.63 | 1810 | 0.4885 |
| 0.4738 | 3.65 | 1820 | 0.4879 |
| 0.4932 | 3.67 | 1830 | 0.4818 |
| 0.4684 | 3.69 | 1840 | 0.4812 |
| 0.5484 | 3.71 | 1850 | 0.4767 |
| 0.5086 | 3.73 | 1860 | 0.4791 |
| 0.3548 | 3.75 | 1870 | 0.4793 |
| 0.5229 | 3.77 | 1880 | 0.4765 |
| 0.4578 | 3.79 | 1890 | 0.4704 |
| 0.5277 | 3.81 | 1900 | 0.4691 |
| 0.4683 | 3.83 | 1910 | 0.4649 |
| 0.448 | 3.85 | 1920 | 0.4684 |
| 0.3752 | 3.87 | 1930 | 0.4697 |
| 0.4631 | 3.89 | 1940 | 0.4678 |
| 0.4277 | 3.91 | 1950 | 0.4608 |
| 0.3646 | 3.93 | 1960 | 0.4609 |
| 0.5276 | 3.95 | 1970 | 0.4543 |
| 0.431 | 3.97 | 1980 | 0.4539 |
| 0.5465 | 3.99 | 1990 | 0.4550 |
| 0.4954 | 4.01 | 2000 | 0.4523 |
| 0.4886 | 4.03 | 2010 | 0.4499 |
| 0.4898 | 4.05 | 2020 | 0.4462 |
| 0.4072 | 4.07 | 2030 | 0.4479 |
| 0.4565 | 4.09 | 2040 | 0.4458 |
| 0.3739 | 4.11 | 2050 | 0.4475 |
| 0.4211 | 4.13 | 2060 | 0.4486 |
| 0.4048 | 4.15 | 2070 | 0.4393 |
| 0.5064 | 4.17 | 2080 | 0.4351 |
| 0.4652 | 4.19 | 2090 | 0.4379 |
| 0.4061 | 4.21 | 2100 | 0.4341 |
| 0.3784 | 4.23 | 2110 | 0.4390 |
| 0.4142 | 4.25 | 2120 | 0.4354 |
| 0.3625 | 4.27 | 2130 | 0.4415 |
| 0.3807 | 4.29 | 2140 | 0.4403 |
| 0.4154 | 4.31 | 2150 | 0.4308 |
| 0.4509 | 4.33 | 2160 | 0.4298 |
| 0.4254 | 4.35 | 2170 | 0.4239 |
| 0.4323 | 4.37 | 2180 | 0.4214 |
| 0.4359 | 4.39 | 2190 | 0.4291 |
| 0.3759 | 4.41 | 2200 | 0.4224 |
| 0.4534 | 4.43 | 2210 | 0.4225 |
| 0.4013 | 4.45 | 2220 | 0.4262 |
| 0.4331 | 4.47 | 2230 | 0.4214 |
| 0.4373 | 4.49 | 2240 | 0.4198 |
| 0.4975 | 4.51 | 2250 | 0.4236 |
| 0.423 | 4.53 | 2260 | 0.4189 |
| 0.4503 | 4.55 | 2270 | 0.4171 |
| 0.3796 | 4.57 | 2280 | 0.4172 |
| 0.4063 | 4.59 | 2290 | 0.4125 |
| 0.3841 | 4.61 | 2300 | 0.4119 |
| 0.2956 | 4.63 | 2310 | 0.4147 |
| 0.3486 | 4.65 | 2320 | 0.4246 |
| 0.3585 | 4.67 | 2330 | 0.4117 |
| 0.4496 | 4.69 | 2340 | 0.4091 |
| 0.399 | 4.71 | 2350 | 0.4049 |
| 0.3885 | 4.73 | 2360 | 0.4004 |
| 0.3728 | 4.75 | 2370 | 0.4003 |
| 0.2698 | 4.77 | 2380 | 0.4009 |
| 0.3799 | 4.79 | 2390 | 0.4003 |
| 0.4888 | 4.81 | 2400 | 0.3974 |
| 0.3795 | 4.83 | 2410 | 0.3995 |
| 0.4249 | 4.85 | 2420 | 0.3968 |
| 0.4635 | 4.87 | 2430 | 0.4001 |
| 0.4965 | 4.89 | 2440 | 0.3934 |
| 0.3745 | 4.91 | 2450 | 0.3987 |
| 0.3601 | 4.93 | 2460 | 0.3986 |
| 0.2878 | 4.95 | 2470 | 0.3941 |
| 0.4297 | 4.97 | 2480 | 0.3890 |
| 0.278 | 4.99 | 2490 | 0.3975 |
| 0.4509 | 5.01 | 2500 | 0.3907 |
| 0.3202 | 5.03 | 2510 | 0.3872 |
| 0.3047 | 5.05 | 2520 | 0.3956 |
| 0.2931 | 5.07 | 2530 | 0.3925 |
| 0.3487 | 5.09 | 2540 | 0.3910 |
| 0.2792 | 5.11 | 2550 | 0.3901 |
| 0.3446 | 5.13 | 2560 | 0.3873 |
| 0.3482 | 5.15 | 2570 | 0.3840 |
| 0.3464 | 5.17 | 2580 | 0.3835 |
| 0.3212 | 5.19 | 2590 | 0.3846 |
| 0.3847 | 5.21 | 2600 | 0.3819 |
| 0.3212 | 5.23 | 2610 | 0.3897 |
| 0.358 | 5.25 | 2620 | 0.3811 |
| 0.3471 | 5.27 | 2630 | 0.3805 |
| 0.3348 | 5.29 | 2640 | 0.3868 |
| 0.342 | 5.31 | 2650 | 0.3769 |
| 0.4504 | 5.33 | 2660 | 0.3774 |
| 0.2713 | 5.35 | 2670 | 0.3803 |
| 0.3848 | 5.37 | 2680 | 0.3776 |
| 0.354 | 5.39 | 2690 | 0.3758 |
| 0.3796 | 5.41 | 2700 | 0.3760 |
| 0.3654 | 5.43 | 2710 | 0.3737 |
| 0.3448 | 5.45 | 2720 | 0.3812 |
| 0.355 | 5.47 | 2730 | 0.3759 |
| 0.288 | 5.49 | 2740 | 0.3711 |
| 0.2991 | 5.51 | 2750 | 0.3691 |
| 0.3443 | 5.53 | 2760 | 0.3708 |
| 0.3374 | 5.55 | 2770 | 0.3659 |
| 0.4078 | 5.57 | 2780 | 0.3709 |
| 0.2967 | 5.59 | 2790 | 0.3683 |
| 0.3532 | 5.61 | 2800 | 0.3638 |
| 0.4123 | 5.63 | 2810 | 0.3642 |
| 0.3195 | 5.65 | 2820 | 0.3655 |
| 0.3161 | 5.67 | 2830 | 0.3599 |
| 0.4152 | 5.69 | 2840 | 0.3621 |
| 0.2802 | 5.71 | 2850 | 0.3648 |
| 0.2909 | 5.73 | 2860 | 0.3604 |
| 0.3105 | 5.75 | 2870 | 0.3604 |
| 0.3291 | 5.77 | 2880 | 0.3553 |
| 0.3916 | 5.79 | 2890 | 0.3603 |
| 0.3657 | 5.81 | 2900 | 0.3544 |
| 0.3745 | 5.83 | 2910 | 0.3559 |
| 0.3281 | 5.85 | 2920 | 0.3517 |
| 0.2892 | 5.87 | 2930 | 0.3551 |
| 0.4121 | 5.89 | 2940 | 0.3489 |
| 0.2908 | 5.91 | 2950 | 0.3532 |
| 0.3677 | 5.93 | 2960 | 0.3469 |
| 0.341 | 5.95 | 2970 | 0.3503 |
| 0.2319 | 5.97 | 2980 | 0.3497 |
| 0.2624 | 5.99 | 2990 | 0.3468 |
| 0.3324 | 6.01 | 3000 | 0.3480 |
| 0.2114 | 6.03 | 3010 | 0.3530 |
| 0.256 | 6.05 | 3020 | 0.3501 |
| 0.2716 | 6.07 | 3030 | 0.3490 |
| 0.2921 | 6.09 | 3040 | 0.3466 |
| 0.2924 | 6.11 | 3050 | 0.3531 |
| 0.3267 | 6.13 | 3060 | 0.3455 |
| 0.3488 | 6.15 | 3070 | 0.3428 |
| 0.301 | 6.17 | 3080 | 0.3455 |
| 0.2656 | 6.19 | 3090 | 0.3450 |
| 0.2377 | 6.21 | 3100 | 0.3474 |
| 0.2344 | 6.23 | 3110 | 0.3461 |
| 0.2816 | 6.25 | 3120 | 0.3489 |
| 0.2675 | 6.27 | 3130 | 0.3427 |
| 0.3315 | 6.29 | 3140 | 0.3393 |
| 0.335 | 6.31 | 3150 | 0.3406 |
| 0.2418 | 6.33 | 3160 | 0.3385 |
| 0.215 | 6.35 | 3170 | 0.3393 |
| 0.2279 | 6.37 | 3180 | 0.3427 |
| 0.2907 | 6.39 | 3190 | 0.3379 |
| 0.2184 | 6.41 | 3200 | 0.3438 |
| 0.3484 | 6.43 | 3210 | 0.3364 |
| 0.2327 | 6.45 | 3220 | 0.3406 |
| 0.2571 | 6.47 | 3230 | 0.3400 |
| 0.2864 | 6.49 | 3240 | 0.3367 |
| 0.2383 | 6.51 | 3250 | 0.3377 |
| 0.187 | 6.53 | 3260 | 0.3346 |
| 0.2453 | 6.55 | 3270 | 0.3349 |
| 0.296 | 6.57 | 3280 | 0.3339 |
| 0.2601 | 6.59 | 3290 | 0.3335 |
| 0.2927 | 6.61 | 3300 | 0.3340 |
| 0.2796 | 6.63 | 3310 | 0.3303 |
| 0.2393 | 6.65 | 3320 | 0.3351 |
| 0.2764 | 6.67 | 3330 | 0.3288 |
| 0.2547 | 6.69 | 3340 | 0.3327 |
| 0.3247 | 6.71 | 3350 | 0.3279 |
| 0.3217 | 6.73 | 3360 | 0.3283 |
| 0.2881 | 6.75 | 3370 | 0.3307 |
| 0.2897 | 6.77 | 3380 | 0.3281 |
| 0.3096 | 6.79 | 3390 | 0.3257 |
| 0.2463 | 6.81 | 3400 | 0.3244 |
| 0.2404 | 6.83 | 3410 | 0.3254 |
| 0.2907 | 6.85 | 3420 | 0.3227 |
| 0.2749 | 6.87 | 3430 | 0.3226 |
| 0.2262 | 6.89 | 3440 | 0.3226 |
| 0.2799 | 6.91 | 3450 | 0.3233 |
| 0.2764 | 6.93 | 3460 | 0.3198 |
| 0.2644 | 6.95 | 3470 | 0.3231 |
| 0.2733 | 6.97 | 3480 | 0.3188 |
| 0.2861 | 6.99 | 3490 | 0.3192 |
| 0.1757 | 7.01 | 3500 | 0.3243 |
| 0.2588 | 7.03 | 3510 | 0.3238 |
| 0.2132 | 7.05 | 3520 | 0.3207 |
| 0.2787 | 7.07 | 3530 | 0.3272 |
| 0.2786 | 7.09 | 3540 | 0.3229 |
| 0.2854 | 7.11 | 3550 | 0.3232 |
| 0.1982 | 7.13 | 3560 | 0.3237 |
| 0.2022 | 7.15 | 3570 | 0.3254 |
| 0.2592 | 7.17 | 3580 | 0.3258 |
| 0.2299 | 7.19 | 3590 | 0.3207 |
| 0.2054 | 7.21 | 3600 | 0.3197 |
| 0.208 | 7.23 | 3610 | 0.3216 |
| 0.2432 | 7.25 | 3620 | 0.3228 |
| 0.2452 | 7.27 | 3630 | 0.3181 |
| 0.264 | 7.29 | 3640 | 0.3238 |
| 0.2019 | 7.31 | 3650 | 0.3178 |
| 0.2299 | 7.33 | 3660 | 0.3218 |
| 0.2465 | 7.35 | 3670 | 0.3172 |
| 0.2466 | 7.37 | 3680 | 0.3167 |
| 0.2824 | 7.39 | 3690 | 0.3143 |
| 0.2314 | 7.41 | 3700 | 0.3143 |
| 0.2822 | 7.43 | 3710 | 0.3143 |
| 0.2254 | 7.45 | 3720 | 0.3139 |
| 0.2454 | 7.47 | 3730 | 0.3218 |
| 0.2656 | 7.49 | 3740 | 0.3116 |
| 0.2172 | 7.51 | 3750 | 0.3154 |
| 0.2408 | 7.53 | 3760 | 0.3127 |
| 0.1761 | 7.55 | 3770 | 0.3149 |
| 0.2232 | 7.57 | 3780 | 0.3114 |
| 0.2902 | 7.59 | 3790 | 0.3136 |
| 0.2485 | 7.61 | 3800 | 0.3146 |
| 0.1901 | 7.63 | 3810 | 0.3094 |
| 0.2962 | 7.65 | 3820 | 0.3120 |
| 0.2093 | 7.67 | 3830 | 0.3133 |
| 0.368 | 7.69 | 3840 | 0.3064 |
| 0.2849 | 7.71 | 3850 | 0.3091 |
| 0.1948 | 7.73 | 3860 | 0.3075 |
| 0.2241 | 7.75 | 3870 | 0.3078 |
| 0.1935 | 7.77 | 3880 | 0.3045 |
| 0.2045 | 7.79 | 3890 | 0.3065 |
| 0.159 | 7.81 | 3900 | 0.3082 |
| 0.1714 | 7.83 | 3910 | 0.3057 |
| 0.1984 | 7.85 | 3920 | 0.3059 |
| 0.2397 | 7.87 | 3930 | 0.3037 |
| 0.1884 | 7.89 | 3940 | 0.3054 |
| 0.2585 | 7.91 | 3950 | 0.3030 |
| 0.2476 | 7.93 | 3960 | 0.3058 |
| 0.2525 | 7.95 | 3970 | 0.3033 |
| 0.2001 | 7.97 | 3980 | 0.3062 |
| 0.1985 | 7.99 | 3990 | 0.3039 |
| 0.1984 | 8.02 | 4000 | 0.3139 |
| 0.2008 | 8.04 | 4010 | 0.3099 |
| 0.2159 | 8.06 | 4020 | 0.3085 |
| 0.2305 | 8.08 | 4030 | 0.3108 |
| 0.2007 | 8.1 | 4040 | 0.3050 |
| 0.2124 | 8.12 | 4050 | 0.3115 |
| 0.1435 | 8.14 | 4060 | 0.3084 |
| 0.1968 | 8.16 | 4070 | 0.3087 |
| 0.2507 | 8.18 | 4080 | 0.3084 |
| 0.1703 | 8.2 | 4090 | 0.3061 |
| 0.2511 | 8.22 | 4100 | 0.3106 |
| 0.1698 | 8.24 | 4110 | 0.3134 |
| 0.2518 | 8.26 | 4120 | 0.3101 |
| 0.1489 | 8.28 | 4130 | 0.3090 |
| 0.1759 | 8.3 | 4140 | 0.3098 |
| 0.1939 | 8.32 | 4150 | 0.3056 |
| 0.2168 | 8.34 | 4160 | 0.3106 |
| 0.2119 | 8.36 | 4170 | 0.3051 |
| 0.1793 | 8.38 | 4180 | 0.3056 |
| 0.2434 | 8.4 | 4190 | 0.3050 |
| 0.2601 | 8.42 | 4200 | 0.3065 |
| 0.1791 | 8.44 | 4210 | 0.3051 |
| 0.1404 | 8.46 | 4220 | 0.3058 |
| 0.222 | 8.48 | 4230 | 0.3059 |
| 0.1809 | 8.5 | 4240 | 0.3070 |
| 0.1745 | 8.52 | 4250 | 0.3066 |
| 0.2236 | 8.54 | 4260 | 0.3012 |
| 0.1965 | 8.56 | 4270 | 0.3037 |
| 0.1836 | 8.58 | 4280 | 0.3051 |
| 0.1912 | 8.6 | 4290 | 0.3017 |
| 0.2207 | 8.62 | 4300 | 0.3025 |
| 0.2481 | 8.64 | 4310 | 0.2997 |
| 0.1506 | 8.66 | 4320 | 0.3003 |
| 0.2216 | 8.68 | 4330 | 0.3035 |
| 0.1866 | 8.7 | 4340 | 0.3014 |
| 0.2025 | 8.72 | 4350 | 0.3035 |
| 0.1521 | 8.74 | 4360 | 0.2992 |
| 0.1598 | 8.76 | 4370 | 0.3034 |
| 0.185 | 8.78 | 4380 | 0.3017 |
| 0.2427 | 8.8 | 4390 | 0.2972 |
| 0.2343 | 8.82 | 4400 | 0.2979 |
| 0.1994 | 8.84 | 4410 | 0.2994 |
| 0.2671 | 8.86 | 4420 | 0.2986 |
| 0.1158 | 8.88 | 4430 | 0.2991 |
| 0.2127 | 8.9 | 4440 | 0.3000 |
| 0.1691 | 8.92 | 4450 | 0.2981 |
| 0.2103 | 8.94 | 4460 | 0.2979 |
| 0.1392 | 8.96 | 4470 | 0.2982 |
| 0.1712 | 8.98 | 4480 | 0.2943 |
| 0.2435 | 9.0 | 4490 | 0.2958 |
| 0.1715 | 9.02 | 4500 | 0.3055 |
| 0.1641 | 9.04 | 4510 | 0.3048 |
| 0.1529 | 9.06 | 4520 | 0.3029 |
| 0.1566 | 9.08 | 4530 | 0.3047 |
| 0.1382 | 9.1 | 4540 | 0.3027 |
| 0.1605 | 9.12 | 4550 | 0.3023 |
| 0.2167 | 9.14 | 4560 | 0.3055 |
| 0.1506 | 9.16 | 4570 | 0.3037 |
| 0.192 | 9.18 | 4580 | 0.3039 |
| 0.139 | 9.2 | 4590 | 0.3030 |
| 0.1974 | 9.22 | 4600 | 0.3038 |
| 0.167 | 9.24 | 4610 | 0.3037 |
| 0.2409 | 9.26 | 4620 | 0.3034 |
| 0.1494 | 9.28 | 4630 | 0.3048 |
| 0.1762 | 9.3 | 4640 | 0.3037 |
| 0.183 | 9.32 | 4650 | 0.3042 |
| 0.1773 | 9.34 | 4660 | 0.3043 |
| 0.1509 | 9.36 | 4670 | 0.3053 |
| 0.1994 | 9.38 | 4680 | 0.3045 |
| 0.1928 | 9.4 | 4690 | 0.3036 |
| 0.1158 | 9.42 | 4700 | 0.3038 |
| 0.1503 | 9.44 | 4710 | 0.3019 |
| 0.1556 | 9.46 | 4720 | 0.3029 |
| 0.1327 | 9.48 | 4730 | 0.3050 |
| 0.1772 | 9.5 | 4740 | 0.3057 |
| 0.1555 | 9.52 | 4750 | 0.3028 |
| 0.1363 | 9.54 | 4760 | 0.3014 |
| 0.139 | 9.56 | 4770 | 0.3010 |
| 0.1639 | 9.58 | 4780 | 0.3013 |
| 0.1669 | 9.6 | 4790 | 0.3015 |
| 0.144 | 9.62 | 4800 | 0.3023 |
| 0.1925 | 9.64 | 4810 | 0.3034 |
| 0.1615 | 9.66 | 4820 | 0.3025 |
| 0.1625 | 9.68 | 4830 | 0.3019 |
| 0.1355 | 9.7 | 4840 | 0.3023 |
| 0.1671 | 9.72 | 4850 | 0.3019 |
| 0.1447 | 9.74 | 4860 | 0.3021 |
| 0.1465 | 9.76 | 4870 | 0.3024 |
| 0.1794 | 9.78 | 4880 | 0.3021 |
| 0.156 | 9.8 | 4890 | 0.3011 |
| 0.1018 | 9.82 | 4900 | 0.3005 |
| 0.1403 | 9.84 | 4910 | 0.3011 |
| 0.1126 | 9.86 | 4920 | 0.3006 |
| 0.1595 | 9.88 | 4930 | 0.3007 |
| 0.1415 | 9.9 | 4940 | 0.3012 |
| 0.1651 | 9.92 | 4950 | 0.3015 |
| 0.1558 | 9.94 | 4960 | 0.3015 |
| 0.1734 | 9.96 | 4970 | 0.3014 |
| 0.1909 | 9.98 | 4980 | 0.3014 |
| 0.1246 | 10.0 | 4990 | 0.3014 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1+cu116
- Datasets 2.10.0
- Tokenizers 0.12.1
|
datasistah/qlora_falcon_20230622
|
datasistah
| 2023-06-22T21:58:03Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T21:53:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the `BitsAndBytesConfig` sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
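A hedged sketch of expressing this config with `transformers.BitsAndBytesConfig`; the Falcon base-model id is an assumption inferred from the repo name:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Sketch only: mirrors the bitsandbytes values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
# "tiiuae/falcon-7b" is an assumption based on the repo name, not stated in this card.
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
```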
### Framework versions
- PEFT 0.4.0.dev0
|
Guilherme34/Jennifer-falcon-7b-qlora
|
Guilherme34
| 2023-06-22T21:55:07Z | 2 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T16:46:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
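A hedged sketch of attaching this PEFT adapter to its base model; the base-model id is an assumption inferred from the repo name:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Sketch only: "tiiuae/falcon-7b" is a guess from the repo name, not stated here.
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "Guilherme34/Jennifer-falcon-7b-qlora")
```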
### Framework versions
- PEFT 0.4.0.dev0
|
Mykcy33/ernie-1.0-base-zh-laure-swag
|
Mykcy33
| 2023-06-22T21:42:16Z | 90 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"ernie",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-06-22T21:19:46Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ernie-1.0-base-zh-laure-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ernie-1.0-base-zh-laure-swag
This model is a fine-tuned version of [nghuyong/ernie-1.0-base-zh](https://huggingface.co/nghuyong/ernie-1.0-base-zh) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0037
- Accuracy: 0.8000
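A hedged sketch of scoring answer candidates with this multiple-choice head; the context and choices are hypothetical placeholders:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("Mykcy33/ernie-1.0-base-zh-laure-swag")
model = AutoModelForMultipleChoice.from_pretrained("Mykcy33/ernie-1.0-base-zh-laure-swag")

context = "An example premise."                            # placeholder
choices = ["First continuation.", "Second continuation."]  # placeholders

# Pair the context with each choice, then reshape to (batch, num_choices, seq_len).
enc = tokenizer([context] * len(choices), choices, padding=True, return_tensors="pt")
enc = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, num_choices)
print("best choice:", logits.argmax(-1).item())
```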
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 1.0264 | 0.7600 |
| No log | 2.0 | 14 | 0.9992 | 0.7500 |
| No log | 3.0 | 21 | 1.0037 | 0.8000 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
consciousAI/cai-lunaris-text-embeddings
|
consciousAI
| 2023-06-22T21:33:52Z | 395 | 4 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-22T18:08:54Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: cai-lunaris-text-embeddings
results:
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.07
- type: map_at_10
value: 29.372999999999998
- type: map_at_100
value: 30.79
- type: map_at_1000
value: 30.819999999999997
- type: map_at_3
value: 24.395
- type: map_at_5
value: 27.137
- type: mrr_at_1
value: 17.923000000000002
- type: mrr_at_10
value: 29.695
- type: mrr_at_100
value: 31.098
- type: mrr_at_1000
value: 31.128
- type: mrr_at_3
value: 24.704
- type: mrr_at_5
value: 27.449
- type: ndcg_at_1
value: 17.07
- type: ndcg_at_10
value: 37.269000000000005
- type: ndcg_at_100
value: 43.716
- type: ndcg_at_1000
value: 44.531
- type: ndcg_at_3
value: 26.839000000000002
- type: ndcg_at_5
value: 31.845000000000002
- type: precision_at_1
value: 17.07
- type: precision_at_10
value: 6.3020000000000005
- type: precision_at_100
value: 0.922
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 11.309
- type: precision_at_5
value: 9.246
- type: recall_at_1
value: 17.07
- type: recall_at_10
value: 63.016000000000005
- type: recall_at_100
value: 92.24799999999999
- type: recall_at_1000
value: 98.72
- type: recall_at_3
value: 33.926
- type: recall_at_5
value: 46.23
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 53.44266265900711
- type: mrr
value: 66.54695950402322
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 75.9652953730204
- type: cos_sim_spearman
value: 73.96554077670989
- type: euclidean_pearson
value: 75.68477255792381
- type: euclidean_spearman
value: 74.59447076995703
- type: manhattan_pearson
value: 75.94984623881341
- type: manhattan_spearman
value: 74.72218452337502
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.119000000000002
- type: map_at_10
value: 19.661
- type: map_at_100
value: 20.706
- type: map_at_1000
value: 20.848
- type: map_at_3
value: 17.759
- type: map_at_5
value: 18.645
- type: mrr_at_1
value: 17.166999999999998
- type: mrr_at_10
value: 23.313
- type: mrr_at_100
value: 24.263
- type: mrr_at_1000
value: 24.352999999999998
- type: mrr_at_3
value: 21.412
- type: mrr_at_5
value: 22.313
- type: ndcg_at_1
value: 17.166999999999998
- type: ndcg_at_10
value: 23.631
- type: ndcg_at_100
value: 28.427000000000003
- type: ndcg_at_1000
value: 31.862000000000002
- type: ndcg_at_3
value: 20.175
- type: ndcg_at_5
value: 21.397
- type: precision_at_1
value: 17.166999999999998
- type: precision_at_10
value: 4.549
- type: precision_at_100
value: 0.8370000000000001
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 9.68
- type: precision_at_5
value: 6.981
- type: recall_at_1
value: 14.119000000000002
- type: recall_at_10
value: 32.147999999999996
- type: recall_at_100
value: 52.739999999999995
- type: recall_at_1000
value: 76.67
- type: recall_at_3
value: 22.019
- type: recall_at_5
value: 25.361
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.576
- type: map_at_10
value: 22.281000000000002
- type: map_at_100
value: 23.066
- type: map_at_1000
value: 23.166
- type: map_at_3
value: 20.385
- type: map_at_5
value: 21.557000000000002
- type: mrr_at_1
value: 20.892
- type: mrr_at_10
value: 26.605
- type: mrr_at_100
value: 27.229
- type: mrr_at_1000
value: 27.296
- type: mrr_at_3
value: 24.809
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 20.892
- type: ndcg_at_10
value: 26.092
- type: ndcg_at_100
value: 29.398999999999997
- type: ndcg_at_1000
value: 31.884
- type: ndcg_at_3
value: 23.032
- type: ndcg_at_5
value: 24.634
- type: precision_at_1
value: 20.892
- type: precision_at_10
value: 4.885
- type: precision_at_100
value: 0.818
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 10.977
- type: precision_at_5
value: 8.013
- type: recall_at_1
value: 16.576
- type: recall_at_10
value: 32.945
- type: recall_at_100
value: 47.337
- type: recall_at_1000
value: 64.592
- type: recall_at_3
value: 24.053
- type: recall_at_5
value: 28.465
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.604
- type: map_at_10
value: 28.754999999999995
- type: map_at_100
value: 29.767
- type: map_at_1000
value: 29.852
- type: map_at_3
value: 26.268
- type: map_at_5
value: 27.559
- type: mrr_at_1
value: 24.326
- type: mrr_at_10
value: 31.602000000000004
- type: mrr_at_100
value: 32.46
- type: mrr_at_1000
value: 32.521
- type: mrr_at_3
value: 29.415000000000003
- type: mrr_at_5
value: 30.581000000000003
- type: ndcg_at_1
value: 24.326
- type: ndcg_at_10
value: 33.335
- type: ndcg_at_100
value: 38.086
- type: ndcg_at_1000
value: 40.319
- type: ndcg_at_3
value: 28.796
- type: ndcg_at_5
value: 30.758999999999997
- type: precision_at_1
value: 24.326
- type: precision_at_10
value: 5.712
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 13.208
- type: precision_at_5
value: 9.329
- type: recall_at_1
value: 20.604
- type: recall_at_10
value: 44.505
- type: recall_at_100
value: 65.866
- type: recall_at_1000
value: 82.61800000000001
- type: recall_at_3
value: 31.794
- type: recall_at_5
value: 36.831
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.280999999999999
- type: map_at_10
value: 11.636000000000001
- type: map_at_100
value: 12.363
- type: map_at_1000
value: 12.469
- type: map_at_3
value: 10.415000000000001
- type: map_at_5
value: 11.144
- type: mrr_at_1
value: 9.266
- type: mrr_at_10
value: 12.838
- type: mrr_at_100
value: 13.608999999999998
- type: mrr_at_1000
value: 13.700999999999999
- type: mrr_at_3
value: 11.507000000000001
- type: mrr_at_5
value: 12.343
- type: ndcg_at_1
value: 9.266
- type: ndcg_at_10
value: 13.877
- type: ndcg_at_100
value: 18.119
- type: ndcg_at_1000
value: 21.247
- type: ndcg_at_3
value: 11.376999999999999
- type: ndcg_at_5
value: 12.675
- type: precision_at_1
value: 9.266
- type: precision_at_10
value: 2.226
- type: precision_at_100
value: 0.47200000000000003
- type: precision_at_1000
value: 0.077
- type: precision_at_3
value: 4.859
- type: precision_at_5
value: 3.6380000000000003
- type: recall_at_1
value: 8.280999999999999
- type: recall_at_10
value: 19.872999999999998
- type: recall_at_100
value: 40.585
- type: recall_at_1000
value: 65.225
- type: recall_at_3
value: 13.014000000000001
- type: recall_at_5
value: 16.147
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.1209999999999996
- type: map_at_10
value: 7.272
- type: map_at_100
value: 8.079
- type: map_at_1000
value: 8.199
- type: map_at_3
value: 6.212
- type: map_at_5
value: 6.736000000000001
- type: mrr_at_1
value: 5.721
- type: mrr_at_10
value: 9.418
- type: mrr_at_100
value: 10.281
- type: mrr_at_1000
value: 10.385
- type: mrr_at_3
value: 8.126
- type: mrr_at_5
value: 8.779
- type: ndcg_at_1
value: 5.721
- type: ndcg_at_10
value: 9.673
- type: ndcg_at_100
value: 13.852999999999998
- type: ndcg_at_1000
value: 17.546999999999997
- type: ndcg_at_3
value: 7.509
- type: ndcg_at_5
value: 8.373
- type: precision_at_1
value: 5.721
- type: precision_at_10
value: 2.04
- type: precision_at_100
value: 0.48
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 4.022
- type: precision_at_5
value: 3.06
- type: recall_at_1
value: 4.1209999999999996
- type: recall_at_10
value: 15.201
- type: recall_at_100
value: 33.922999999999995
- type: recall_at_1000
value: 61.529999999999994
- type: recall_at_3
value: 8.869
- type: recall_at_5
value: 11.257
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.09
- type: map_at_10
value: 19.573999999999998
- type: map_at_100
value: 20.580000000000002
- type: map_at_1000
value: 20.704
- type: map_at_3
value: 17.68
- type: map_at_5
value: 18.64
- type: mrr_at_1
value: 17.227999999999998
- type: mrr_at_10
value: 23.152
- type: mrr_at_100
value: 24.056
- type: mrr_at_1000
value: 24.141000000000002
- type: mrr_at_3
value: 21.142
- type: mrr_at_5
value: 22.201
- type: ndcg_at_1
value: 17.227999999999998
- type: ndcg_at_10
value: 23.39
- type: ndcg_at_100
value: 28.483999999999998
- type: ndcg_at_1000
value: 31.709
- type: ndcg_at_3
value: 19.883
- type: ndcg_at_5
value: 21.34
- type: precision_at_1
value: 17.227999999999998
- type: precision_at_10
value: 4.3790000000000004
- type: precision_at_100
value: 0.826
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 9.496
- type: precision_at_5
value: 6.872
- type: recall_at_1
value: 14.09
- type: recall_at_10
value: 31.580000000000002
- type: recall_at_100
value: 54.074
- type: recall_at_1000
value: 77.092
- type: recall_at_3
value: 21.601
- type: recall_at_5
value: 25.333
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.538
- type: map_at_10
value: 15.75
- type: map_at_100
value: 16.71
- type: map_at_1000
value: 16.838
- type: map_at_3
value: 13.488
- type: map_at_5
value: 14.712
- type: mrr_at_1
value: 13.813
- type: mrr_at_10
value: 19.08
- type: mrr_at_100
value: 19.946
- type: mrr_at_1000
value: 20.044
- type: mrr_at_3
value: 16.838
- type: mrr_at_5
value: 17.951
- type: ndcg_at_1
value: 13.813
- type: ndcg_at_10
value: 19.669
- type: ndcg_at_100
value: 24.488
- type: ndcg_at_1000
value: 27.87
- type: ndcg_at_3
value: 15.479000000000001
- type: ndcg_at_5
value: 17.229
- type: precision_at_1
value: 13.813
- type: precision_at_10
value: 3.916
- type: precision_at_100
value: 0.743
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 7.534000000000001
- type: precision_at_5
value: 5.822
- type: recall_at_1
value: 10.538
- type: recall_at_10
value: 28.693
- type: recall_at_100
value: 50.308
- type: recall_at_1000
value: 74.44
- type: recall_at_3
value: 16.866999999999997
- type: recall_at_5
value: 21.404999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.044583333333332
- type: map_at_10
value: 15.682833333333335
- type: map_at_100
value: 16.506500000000003
- type: map_at_1000
value: 16.623833333333334
- type: map_at_3
value: 14.130833333333333
- type: map_at_5
value: 14.963583333333332
- type: mrr_at_1
value: 13.482833333333332
- type: mrr_at_10
value: 18.328500000000002
- type: mrr_at_100
value: 19.095416666666665
- type: mrr_at_1000
value: 19.18241666666666
- type: mrr_at_3
value: 16.754749999999998
- type: mrr_at_5
value: 17.614749999999997
- type: ndcg_at_1
value: 13.482833333333332
- type: ndcg_at_10
value: 18.81491666666667
- type: ndcg_at_100
value: 22.946833333333334
- type: ndcg_at_1000
value: 26.061083333333336
- type: ndcg_at_3
value: 15.949333333333332
- type: ndcg_at_5
value: 17.218333333333334
- type: precision_at_1
value: 13.482833333333332
- type: precision_at_10
value: 3.456583333333333
- type: precision_at_100
value: 0.6599166666666666
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 7.498833333333332
- type: precision_at_5
value: 5.477166666666667
- type: recall_at_1
value: 11.044583333333332
- type: recall_at_10
value: 25.737750000000005
- type: recall_at_100
value: 44.617916666666666
- type: recall_at_1000
value: 67.56524999999999
- type: recall_at_3
value: 17.598249999999997
- type: recall_at_5
value: 20.9035
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.362
- type: map_at_10
value: 13.414000000000001
- type: map_at_100
value: 14.083000000000002
- type: map_at_1000
value: 14.168
- type: map_at_3
value: 12.098
- type: map_at_5
value: 12.803999999999998
- type: mrr_at_1
value: 11.043
- type: mrr_at_10
value: 15.158
- type: mrr_at_100
value: 15.845999999999998
- type: mrr_at_1000
value: 15.916
- type: mrr_at_3
value: 13.88
- type: mrr_at_5
value: 14.601
- type: ndcg_at_1
value: 11.043
- type: ndcg_at_10
value: 16.034000000000002
- type: ndcg_at_100
value: 19.686
- type: ndcg_at_1000
value: 22.188
- type: ndcg_at_3
value: 13.530000000000001
- type: ndcg_at_5
value: 14.704
- type: precision_at_1
value: 11.043
- type: precision_at_10
value: 2.791
- type: precision_at_100
value: 0.5
- type: precision_at_1000
value: 0.077
- type: precision_at_3
value: 6.237
- type: precision_at_5
value: 4.5089999999999995
- type: recall_at_1
value: 9.362
- type: recall_at_10
value: 22.396
- type: recall_at_100
value: 39.528999999999996
- type: recall_at_1000
value: 58.809
- type: recall_at_3
value: 15.553
- type: recall_at_5
value: 18.512
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.657
- type: map_at_10
value: 8.273
- type: map_at_100
value: 8.875
- type: map_at_1000
value: 8.977
- type: map_at_3
value: 7.32
- type: map_at_5
value: 7.792000000000001
- type: mrr_at_1
value: 7.02
- type: mrr_at_10
value: 9.966999999999999
- type: mrr_at_100
value: 10.636
- type: mrr_at_1000
value: 10.724
- type: mrr_at_3
value: 8.872
- type: mrr_at_5
value: 9.461
- type: ndcg_at_1
value: 7.02
- type: ndcg_at_10
value: 10.199
- type: ndcg_at_100
value: 13.642000000000001
- type: ndcg_at_1000
value: 16.643
- type: ndcg_at_3
value: 8.333
- type: ndcg_at_5
value: 9.103
- type: precision_at_1
value: 7.02
- type: precision_at_10
value: 1.8929999999999998
- type: precision_at_100
value: 0.43
- type: precision_at_1000
value: 0.08099999999999999
- type: precision_at_3
value: 3.843
- type: precision_at_5
value: 2.884
- type: recall_at_1
value: 5.657
- type: recall_at_10
value: 14.563
- type: recall_at_100
value: 30.807000000000002
- type: recall_at_1000
value: 53.251000000000005
- type: recall_at_3
value: 9.272
- type: recall_at_5
value: 11.202
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.671999999999999
- type: map_at_10
value: 14.651
- type: map_at_100
value: 15.406
- type: map_at_1000
value: 15.525
- type: map_at_3
value: 13.461
- type: map_at_5
value: 14.163
- type: mrr_at_1
value: 12.407
- type: mrr_at_10
value: 16.782
- type: mrr_at_100
value: 17.562
- type: mrr_at_1000
value: 17.653
- type: mrr_at_3
value: 15.47
- type: mrr_at_5
value: 16.262
- type: ndcg_at_1
value: 12.407
- type: ndcg_at_10
value: 17.251
- type: ndcg_at_100
value: 21.378
- type: ndcg_at_1000
value: 24.689
- type: ndcg_at_3
value: 14.915000000000001
- type: ndcg_at_5
value: 16.1
- type: precision_at_1
value: 12.407
- type: precision_at_10
value: 2.91
- type: precision_at_100
value: 0.573
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 6.779
- type: precision_at_5
value: 4.888
- type: recall_at_1
value: 10.671999999999999
- type: recall_at_10
value: 23.099
- type: recall_at_100
value: 41.937999999999995
- type: recall_at_1000
value: 66.495
- type: recall_at_3
value: 16.901
- type: recall_at_5
value: 19.807
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.364
- type: map_at_10
value: 17.772
- type: map_at_100
value: 18.659
- type: map_at_1000
value: 18.861
- type: map_at_3
value: 16.659
- type: map_at_5
value: 17.174
- type: mrr_at_1
value: 16.996
- type: mrr_at_10
value: 21.687
- type: mrr_at_100
value: 22.313
- type: mrr_at_1000
value: 22.422
- type: mrr_at_3
value: 20.652
- type: mrr_at_5
value: 21.146
- type: ndcg_at_1
value: 16.996
- type: ndcg_at_10
value: 21.067
- type: ndcg_at_100
value: 24.829
- type: ndcg_at_1000
value: 28.866999999999997
- type: ndcg_at_3
value: 19.466
- type: ndcg_at_5
value: 19.993
- type: precision_at_1
value: 16.996
- type: precision_at_10
value: 4.071000000000001
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 9.223
- type: precision_at_5
value: 6.4030000000000005
- type: recall_at_1
value: 13.364
- type: recall_at_10
value: 25.976
- type: recall_at_100
value: 44.134
- type: recall_at_1000
value: 73.181
- type: recall_at_3
value: 20.503
- type: recall_at_5
value: 22.409000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.151
- type: map_at_10
value: 9.155000000000001
- type: map_at_100
value: 9.783999999999999
- type: map_at_1000
value: 9.879
- type: map_at_3
value: 7.825
- type: map_at_5
value: 8.637
- type: mrr_at_1
value: 5.915
- type: mrr_at_10
value: 10.34
- type: mrr_at_100
value: 10.943999999999999
- type: mrr_at_1000
value: 11.033
- type: mrr_at_3
value: 8.934000000000001
- type: mrr_at_5
value: 9.812
- type: ndcg_at_1
value: 5.915
- type: ndcg_at_10
value: 11.561
- type: ndcg_at_100
value: 14.971
- type: ndcg_at_1000
value: 17.907999999999998
- type: ndcg_at_3
value: 8.896999999999998
- type: ndcg_at_5
value: 10.313
- type: precision_at_1
value: 5.915
- type: precision_at_10
value: 2.1069999999999998
- type: precision_at_100
value: 0.414
- type: precision_at_1000
value: 0.074
- type: precision_at_3
value: 4.128
- type: precision_at_5
value: 3.327
- type: recall_at_1
value: 5.151
- type: recall_at_10
value: 17.874000000000002
- type: recall_at_100
value: 34.174
- type: recall_at_1000
value: 56.879999999999995
- type: recall_at_3
value: 10.732999999999999
- type: recall_at_5
value: 14.113000000000001
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.101
- type: map_at_10
value: 5.434
- type: map_at_100
value: 6.267
- type: map_at_1000
value: 6.418
- type: map_at_3
value: 4.377000000000001
- type: map_at_5
value: 4.841
- type: mrr_at_1
value: 7.166
- type: mrr_at_10
value: 12.012
- type: mrr_at_100
value: 13.144
- type: mrr_at_1000
value: 13.229
- type: mrr_at_3
value: 9.826
- type: mrr_at_5
value: 10.921
- type: ndcg_at_1
value: 7.166
- type: ndcg_at_10
value: 8.687000000000001
- type: ndcg_at_100
value: 13.345
- type: ndcg_at_1000
value: 16.915
- type: ndcg_at_3
value: 6.276
- type: ndcg_at_5
value: 7.013
- type: precision_at_1
value: 7.166
- type: precision_at_10
value: 2.9250000000000003
- type: precision_at_100
value: 0.771
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 4.734
- type: precision_at_5
value: 3.8830000000000005
- type: recall_at_1
value: 3.101
- type: recall_at_10
value: 11.774999999999999
- type: recall_at_100
value: 28.819
- type: recall_at_1000
value: 49.886
- type: recall_at_3
value: 5.783
- type: recall_at_5
value: 7.692
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.758
- type: map_at_10
value: 5.507
- type: map_at_100
value: 7.1819999999999995
- type: map_at_1000
value: 7.652
- type: map_at_3
value: 4.131
- type: map_at_5
value: 4.702
- type: mrr_at_1
value: 28.499999999999996
- type: mrr_at_10
value: 37.693
- type: mrr_at_100
value: 38.657000000000004
- type: mrr_at_1000
value: 38.704
- type: mrr_at_3
value: 34.792
- type: mrr_at_5
value: 36.417
- type: ndcg_at_1
value: 20.625
- type: ndcg_at_10
value: 14.771999999999998
- type: ndcg_at_100
value: 16.821
- type: ndcg_at_1000
value: 21.546000000000003
- type: ndcg_at_3
value: 16.528000000000002
- type: ndcg_at_5
value: 15.573
- type: precision_at_1
value: 28.499999999999996
- type: precision_at_10
value: 12.25
- type: precision_at_100
value: 3.7600000000000002
- type: precision_at_1000
value: 0.86
- type: precision_at_3
value: 19.167
- type: precision_at_5
value: 16.25
- type: recall_at_1
value: 2.758
- type: recall_at_10
value: 9.164
- type: recall_at_100
value: 21.022
- type: recall_at_1000
value: 37.053999999999995
- type: recall_at_3
value: 5.112
- type: recall_at_5
value: 6.413
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 28.53554681148413
- type: mrr
value: 29.290078704990325
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 76.52926207453477
- type: cos_sim_spearman
value: 68.98528351149498
- type: euclidean_pearson
value: 73.7744559091218
- type: euclidean_spearman
value: 69.03481995814735
- type: manhattan_pearson
value: 73.72818267270651
- type: manhattan_spearman
value: 69.00576442086793
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 61.71540153163407
- type: cos_sim_spearman
value: 58.502746406116614
- type: euclidean_pearson
value: 60.82817999438477
- type: euclidean_spearman
value: 58.988494433752756
- type: manhattan_pearson
value: 60.87147859170236
- type: manhattan_spearman
value: 59.03527382025516
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 72.89990498692094
- type: cos_sim_spearman
value: 74.03028513377879
- type: euclidean_pearson
value: 73.8252088833803
- type: euclidean_spearman
value: 74.15554246478399
- type: manhattan_pearson
value: 73.80947397334666
- type: manhattan_spearman
value: 74.13117958176566
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 70.67974206005906
- type: cos_sim_spearman
value: 66.18263558486296
- type: euclidean_pearson
value: 69.5048876024341
- type: euclidean_spearman
value: 66.36380457878391
- type: manhattan_pearson
value: 69.4895372451589
- type: manhattan_spearman
value: 66.36941569935124
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 73.99856913569187
- type: cos_sim_spearman
value: 75.54712054246464
- type: euclidean_pearson
value: 74.55692573876115
- type: euclidean_spearman
value: 75.34499056740096
- type: manhattan_pearson
value: 74.59342318869683
- type: manhattan_spearman
value: 75.35708317926819
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 72.3343670787494
- type: cos_sim_spearman
value: 73.7136650302399
- type: euclidean_pearson
value: 73.86004257913046
- type: euclidean_spearman
value: 73.9557418048638
- type: manhattan_pearson
value: 73.78919091538661
- type: manhattan_spearman
value: 73.86316425954108
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.08159601556619
- type: cos_sim_spearman
value: 80.13910828685532
- type: euclidean_pearson
value: 79.39197806617453
- type: euclidean_spearman
value: 79.85692277871196
- type: manhattan_pearson
value: 79.32452246324705
- type: manhattan_spearman
value: 79.70120373587193
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.29720207747786
- type: cos_sim_spearman
value: 65.65260681394685
- type: euclidean_pearson
value: 64.49002165983158
- type: euclidean_spearman
value: 65.25917651158736
- type: manhattan_pearson
value: 64.49981108236335
- type: manhattan_spearman
value: 65.20426825202405
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 71.1871068550574
- type: cos_sim_spearman
value: 71.40167034949341
- type: euclidean_pearson
value: 72.2373684855404
- type: euclidean_spearman
value: 71.90255429812984
- type: manhattan_pearson
value: 72.23173532049509
- type: manhattan_spearman
value: 71.87843489689064
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 68.65000574464773
- type: mrr
value: 88.29363084265044
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 40.76107749144358
- type: mrr
value: 41.03689202953908
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 28.68520527813894
- type: cos_sim_spearman
value: 29.017620841627433
- type: dot_pearson
value: 29.25380949876322
- type: dot_spearman
value: 29.33885250837327
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
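Since the model is advertised for semantic search, here is a minimal sketch of ranking a small corpus against a query with the built-in `util.semantic_search` helper (the placeholder model name must be filled in first; the corpus and query are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

corpus = ["A man is eating food.", "A monkey is playing drums.", "A cheetah chases its prey."]
query = "Someone is having a meal"

# Encode to tensors so the similarity helper can operate on them directly
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], round(hit['score'], 3))
```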
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
hongrui/mammogram_v_2_1
|
hongrui
| 2023-06-22T21:30:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-22T10:29:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/mammogram_v_2_1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images below.




|
catrabbitbear/Reinforce-cartpole-2
|
catrabbitbear
| 2023-06-22T21:07:54Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T21:07:45Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ParisNeo/lollms-personalities-zoo
|
ParisNeo
| 2023-06-22T20:44:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-21T14:22:50Z |
# lollms_personalities_zoo
Lord of LLMS personalities zoo
|
Andre-M/Taxi-1
|
Andre-M
| 2023-06-22T20:36:23Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T20:36:20Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Andre-M/Taxi-1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
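A short evaluation sketch follows; it assumes the Gymnasium step API and that the pickle stores the learned table under a `qtable` key, as the Deep RL course notebooks do:
```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0
while not done:
    # Greedy action from the learned Q-table ("qtable" key assumed from the course notebook)
    action = np.argmax(model["qtable"][state])
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
print(f"Episode reward: {total_reward}")
```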
|
serpapi/bert-base-local-results
|
serpapi
| 2023-06-22T20:16:07Z | 115 | 6 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"scraping",
"parsing",
"serp",
"api",
"opensource",
"en",
"dataset:serpapi/local-results-en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T21:53:30Z |
---
language:
- en
pipeline_tag: text-classification
widget:
- title: Rating Example
text: '4.7'
- title: Reviews Example
text: (188)
- title: Reviews Example 2
text: '188'
- title: Reviews Example 3
text: No Reviews
- title: Price Example
text: $
- title: Type Example
text: Coffee shop
- title: Address Example
text: Frederick, MD
- title: Address Example 2
text: 552 W 48th St
- title: Address Example 3
text: In Hilton Hotel
- title: Hours Example
text: Closed
- title: Hours Example 2
text: Opens 7 AM Fri
- title: Hours Example 3
text: Permanently closed
- title: Service Option Example
text: Dine-in
- title: Service Option Example 2
text: Takeout
- title: Service Option Example 3
text: Delivery
- title: Phone Example
text: (301) 000-0000
- title: Years In Business Example
text: 5+ Years in Business
- title: Button Text Example
text: Directions
- title: Description Example
text: 'Provides: Auto maintenance'
license: mit
datasets:
- serpapi/local-results-en
tags:
- scraping
- parsing
- serp
- api
- opensource
---
<h1 align="center">BERT-Based Classification Model for Google Local Listings</h1>
<p align="center">
<img src="https://camo.githubusercontent.com/6c920f0b551360ca3257308e0f3547fe538496b9cb332d6a208992030abf6c3d/68747470733a2f2f736572706170692e636f6d2f616e64726f69642d6368726f6d652d353132783531322e706e67" alt="The Logo of SerpApi" width="200" height="200">
</p>
<p align="center">
This repository contains a BERT-based classification model developed using the Hugging Face library, and a dataset gathered by <a href='https://serpapi.com/google-local-api'>SerpApi's Google Local API</a>. The model is designed to classify different texts extracted from Google Local Listings.
</p>
<p align="center">
You may check out the blog post explaining the model's use case with an example: <a href="https://serpapi.com/blog/real-world-example-of-ai-powered-parsing/">Real World Example of AI Powered Parsing</a>.
</p>
<p align="center">
You may also check out the Open Source Github Repository that contains the source code of a Ruby Gem called <a href="https://github.com/serpapi/google-local-results-ai-parser">`google-local-results-ai-parser`</a>.
</p>
---
<h2 align="center">Usage and Classification for Parsing</h2>
<p align="center">
The example code below demonstrates how to use the model in Python with the Inference API for prototyping. You may use different programming languages to call the results, and you may parallelize your work. The prototyping endpoint allows only a limited number of calls. For <code>Production Purposes</code> or <code>Large Prototyping Activities</code>, consider setting up an <code>Inference API Endpoint from Huggingface</code>, or a <code>Private API Server</code> for serving the model.
</p>
```py
import requests

API_URL = "https://api-inference.huggingface.co/models/serpapi/bert-base-local-results"
headers = {"Authorization": "Bearer xxxxx"}
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()
output = query({
"inputs": "5540 N Lamar Blvd #12, Austin, TX 78756, United States",
})
```
```
Output: address
```
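<p align="center">
For local prototyping without the Inference API, the model can also be loaded with the <code>transformers</code> pipeline. The sketch below assumes a standard text-classification setup; the label names come from the model's own configuration.
</p>
```py
from transformers import pipeline

# Downloads the tokenizer and fine-tuned weights from the Hugging Face Hub
classifier = pipeline("text-classification", model="serpapi/bert-base-local-results")

result = classifier("5540 N Lamar Blvd #12, Austin, TX 78756, United States")
print(result)  # e.g. [{'label': 'address', 'score': 0.99}]
```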
---
<h2 align="center">Strong Features</h2>
<div align="center">
<p>The BERT-based model excels in the following areas:</p>
<div style="display: flex; justify-content: center;">
<div style="text-align: left;">
<ul style="list-style-position: inside;">
<li><strong>Differentiating difficult semantic similarities with ease</strong>
<ul style="list-style-type: disc;">
<li><code>"No Reviews"</code> → <code>reviews</code></li>
<li><code>"(5K+)"</code> → <code>reviews</code></li>
</ul>
</li>
<li><strong>Handling partial texts that can be combined later</strong>
<ul style="list-style-type: disc;">
<li><code>"Open ⋅ Closes 5 pm"</code>
<ul style="list-style-type: circle;">
<li><code>"Open"</code> → <code>hours</code></li>
<li><code>"Closes 5 pm"</code> → <code>hours</code></li>
</ul>
</li>
</ul>
</li>
<li><strong>Handling Vocabulary from diverse areas with ease</strong>
<ul style="list-style-type: disc;">
<li><code>"Doctor"</code> → <code>type</code></li>
<li><code>"Restaurant"</code> → <code>type</code></li>
</ul>
</li>
<li><strong>Returning Assurance Score for After-Correction</strong>
<ul style="list-style-type: disc;">
<li><code>"4.7"</code> → <code>rating(0.999)</code></li>
</ul>
</li>
<li><strong>Strong Against Grammatical Mistakes</strong>
<ul style="list-style-type: disc;">
<li><code>"Krebside Pickup"</code> → <code>service options</code></li>
</ul>
</li>
</ul>
</div>
</div>
</div>
---
<h2 align="center">Parts Covered and Corresponding Keys in SerpApi Parsers</h2>
<div style="display: flex; justify-content: center;">
<div style="text-align: left;">
<ul style="list-style-position: inside;">
<li><strong>Type of Place:</strong> <code>type</code></li>
<li><strong>Number of Reviews:</strong> <code>reviews</code></li>
<li><strong>Phone Number:</strong> <code>phone</code></li>
<li><strong>Rating:</strong> <code>rating</code></li>
<li><strong>Address:</strong> <code>address</code></li>
<li><strong>Operating Hours:</strong> <code>hours</code></li>
<li><strong>Description or Descriptive Review:</strong> <code>description</code></li>
<li><strong>Expensiveness:</strong> <code>expensiveness</code></li>
<li><strong>Service Options:</strong> <code>service options</code></li>
<li><strong>Button Text:</strong> <code>links</code></li>
<li><strong>Years in Business:</strong> <code>years_in_business</code></li>
</ul>
</div>
</div>
<p align="center">
Please refer to the documentation of SerpApi's Google Local API and Google Local Pack API for more details on different parts:
</p>
<div align="center">
<strong>References:</strong>
<ul style="text-align: center; list-style-position: inside;">
<li>SerpApi's Google Local API: <a href ="https://serpapi.com/google-local-api">https://serpapi.com/google-local-api</a></li>
<li>SerpApi's Google Local Pack API: <a href="https://serpapi.com/local-pack">https://serpapi.com/local-pack</a></li>
</ul>
</div>
---
<h2 align="center">Known Limitations</h2>
<div align="center">
<p>The model has a few limitations that should be taken into account:</p>
<div style="display: flex; justify-content: center;">
<div style="text-align: left;">
<ul style="list-style-position: inside;">
<li>The model does not classify the title of a place. This is because the title often contains many elements that can be easily confused with other parts, even to the human eye.</li>
<li>The <code>label</code> key is not covered by the model, as it can be easily handled with traditional code.</li>
<li>In some cases, <code>button text</code> could be classified as <code>service options</code> or <code>address</code>. However, this can easily be avoided by checking whether a text is inside a button in the traditional part of the code. The button text label is only included to handle unexpected edge cases.
<ul style="list-style-type: circle">
<li><code>"Delivery"</code> → <code>service options [Correct Label is button text]</code></li>
<li><code>"Share"</code> → <code>address [Correct Label is button text]</code></li>
</ul>
</li>
<li>In some cases, the model may classify a portion of the <code>description</code> as <code>hours</code> if the description is about operating hours. For example:
<ul style="list-style-type: disc;">
<li><code>"Drive through: Open ⋅ Closes 12 AM"</code>
<ul style="list-style-type: circle">
<li><code>"Drive through: Open"</code> → <code>description</code></li>
<li><code>"Closes 12 AM"</code> → <code>hours</code></li>
</ul>
</li>
</ul>
</li>
<li>In some cases, the model may classify some <code>description</code> as <code>type</code>. This is because some <code>description</code> texts do look like <code>type</code> texts. For example:
<ul style="list-style-type: circle">
<li><code>"Iconic Seattle-based coffeehouse chain"</code> → <code>type [Correct Label is description]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>button text</code> as <code>hours</code>. This is most likely a deficiency in the training dataset, and may be resolved in coming versions. For example:
<ul style="list-style-type: circle">
<li><code>"Expand more"</code> → <code>hours [Correct Label is button text]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>service options</code> as <code>type</code>. This is most likely a deficiency in the training dataset, and may be resolved in coming versions. For example:
<ul style="list-style-type: circle">
<li><code>"Takeaway"</code> → <code>type [Correct Label is service options]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>reviews</code> as <code>rating</code> or <code>price</code>. This is most likely a deficiency in the training dataset, and may be resolved in coming versions. For example:
<ul style="list-style-type: circle">
<li><code>"(1.4K)"</code> → <code>rating [Correct Label is reviews]</code></li>
<li><code>"(1.6K)"</code> → <code>price [Correct Label is reviews]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>service options</code> as <code>description</code> or <code>type</code>. The confusion with <code>description</code> stems from a recent change in how these are categorized in SerpApi keys; the training data contains labels from before that change. For example:
<ul style="list-style-type: circle">
<li><code>"On-site services"</code> → <code>type [Correct Label is service options]</code></li>
<li><code>"Online appointments"</code> → <code>description [Correct Label is service options]</code></li>
</ul>
</li>
<li>The model may be susceptible to errors on one-word entries. These are a minority of cases, and they can be corrected with assurance scores. For example:
<ul style="list-style-type: circle">
<li><code>"Sushi"</code> → <code>address(0.984), type(0.0493) [Correct Label is type]</code></li>
<li><code>"Diagorou 4"</code> → <code>address(0.999) [Correct address in same listing]</code></li>
</ul>
</li>
<li>The model cannot differentiate between extra parts that are extracted in SerpApi's Google Local API and Google Local Pack API. These parts are not feasible to extract via Classification Models.</li>
<li>The model is not designed for listings outside the English language.</li>
</ul>
</div>
</div>
</div>
---
<h2 align="center">Disclaimer</h2>
<p align="center">We value full transparency and painful honesty both in our internal and external communications. We believe a world with complete and open transparency is a better world.</p>
<p align="center">
However, while we strive for transparency, there are certain situations where sharing specific datasets may not be feasible or advisable. In the case of the dataset used to train our model, which contains different parts of a Google Local Listing including addresses and phone numbers, we have made a careful decision not to share it. We prioritize the well-being and safety of individuals, and sharing this dataset could potentially cause harm to people whose personal information is included.
</p>
<p align="center">
Protecting the privacy and security of individuals is of utmost importance to us. Disclosing personal information, such as addresses and phone numbers, without proper consent or safeguards could lead to privacy violations, identity theft, harassment, or other forms of misuse. Our commitment to responsible data usage means that we handle sensitive information with great care and take appropriate measures to ensure its protection.
</p>
<p align="center">
While we understand the value of transparency, we also recognize the need to strike a balance between transparency and safeguarding individuals' privacy and security. In this particular case, the potential harm that could result from sharing the dataset outweighs the benefits of complete transparency. By prioritizing privacy, we aim to create a safer and more secure environment for all individuals involved.
</p>
<p align="center">
We appreciate your understanding and support in our commitment to responsible and ethical data practices. If you have any further questions or concerns, please feel free to reach out to us.
</p>
|
LeoDog896/asl-letters-yolov8
|
LeoDog896
| 2023-06-22T20:07:09Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-22T18:55:42Z |
---
license: mit
---
# asl-letters-yolov8
YOLOv8n model trained to detect the 26 letters of the American Sign Language Alphabet.
## Dataset
The [dataset](https://universe.roboflow.com/meredith-lo-pmqx7/asl-project) is from Roboflow.
|
SlyEcho/open_llama_3b_ggml
|
SlyEcho
| 2023-06-22T20:02:38Z | 0 | 28 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-05-30T09:04:06Z |
---
license: apache-2.0
---
# ggml versions of OpenLLaMa 3B
- Version: 1T tokens final version
- Project: [OpenLLaMA: An Open Reproduction of LLaMA](https://github.com/openlm-research/open_llama)
- Model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
- [llama.cpp](https://github.com/ggerganov/llama.cpp): build 607(ffb06a3) or later
## Use with llama.cpp
Support is now merged into the master branch.
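For quick experiments from Python, the files also work with the `llama-cpp-python` bindings. A minimal sketch follows; the filename is illustrative, so substitute one of the quantized files from this repository:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# The model filename is a placeholder; pick one of the quantized files from this repo
llm = Llama(model_path="open-llama-3b-q5_1.bin", n_ctx=2048)

out = llm("Q: What is the largest animal?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```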
## Newer quantizations
There are now more quantization types in llama.cpp, some below 4 bits.
These are currently not supported, likely because some weight tensors have shapes that are not divisible by 256.
## Perplexity on wiki.test.raw
| Q | chunk | 600BT | 1000BT |
| --: | --: | --: | --: |
| F16 | [616] | 8.4656 | 7.7861 |
| Q8_0 | [616] | 8.4667 | 7.7874 |
| Q5_1 | [616] | 8.5072 | 7.8424 |
| Q5_0 | [616] | 8.5156 | 7.8474 |
| Q4_1 | [616] | 8.6102 | 8.0483 |
| Q4_0 | [616] | 8.6674 | 8.0962 |
|
emozilla/open_llama_7b-scaled
|
emozilla
| 2023-06-22T19:59:17Z | 8 | 10 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T18:45:58Z |
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
This is a modified version of the original LLaMA model that incorporates Scaled Rotary Embeddings first proposed by [kaiokendev](https://kaiokendev.github.io/). By default, the model is configured to be equivalent to the original OpenLLaMA model (2048 context length). To modify, instantiate the LLaMA configuration and set `max_position_embeddings` to the desired context length. The value should be a power of 2, e.g. 2048, 4096, 8192, etc.
```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("emozilla/open_llama_7b-scaled", \
trust_remote_code=True)
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained("emozilla/open_llama_7b-scaled", \
config=config, trust_remote_code=True)
```
You should also set `model_max_length` on your tokenizer.
```python
tokenizer = AutoTokenizer.from_pretrained("emozilla/open_llama_7b-scaled", model_max_length=8192)
```
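Putting the pieces together, a minimal generation sketch (the prompt and decoding settings are illustrative):
```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

config = AutoConfig.from_pretrained("emozilla/open_llama_7b-scaled", trust_remote_code=True)
config.max_position_embeddings = 8192

tokenizer = AutoTokenizer.from_pretrained("emozilla/open_llama_7b-scaled", model_max_length=8192)
model = AutoModelForCausalLM.from_pretrained(
    "emozilla/open_llama_7b-scaled", config=config,
    torch_dtype=torch.float16, device_map="auto", trust_remote_code=True,
)

input_ids = tokenizer("Q: What is the largest animal?\nA:", return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```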
# OpenLLaMA: An Open Reproduction of LLaMA
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as the preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights. Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.
## Dataset and Training
We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B 600BT |
| ---------------------- | -------- | -------- | ------------ | ------------ | ------------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.32 | 0.35 |
| anli_r3/acc | 0.35 | 0.37 | 0.38 | 0.35 | 0.38 |
| arc_challenge/acc | 0.34 | 0.39 | 0.37 | 0.34 | 0.39 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.38 | 0.37 | 0.42 |
| arc_easy/acc | 0.67 | 0.68 | 0.72 | 0.69 | 0.74 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.68 | 0.65 | 0.70 |
| ddboolq/acc | 0.50 | 0.56 | 0.53 | 0.49 | 0.71 |
| hellaswag/acc | 0.36 | 0.36 | 0.63 | 0.43 | 0.54 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.72 | 0.67 | 0.73 |
| openbookqa/acc | 0.29 | 0.29 | 0.30 | 0.27 | 0.30 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.40 | 0.40 | 0.41 |
| piqa/acc | 0.75 | 0.78 | 0.76 | 0.75 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.77 | 0.76 | 0.78 |
| record/em | 0.88 | 0.91 | 0.89 | 0.88 | 0.90 |
| record/f1 | 0.89 | 0.91 | 0.90 | 0.89 | 0.90 |
| rte/acc | 0.54 | 0.56 | 0.60 | 0.58 | 0.65 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.23 | 0.22 | 0.22 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.35 | 0.35 | 0.35 |
| wic/acc | 0.50 | 0.50 | 0.51 | 0.48 | 0.49 |
| winogrande/acc | 0.64 | 0.68 | 0.67 | 0.62 | 0.67 |
| Average | 0.51 | 0.53 | 0.55 | 0.52 | 0.56 |
We removed the task CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We'd like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, and Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We'd also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We'd like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
sid/q-FrozenLake-v1-4x4-noSlippery
|
sid
| 2023-06-22T19:36:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:36:15Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="sid/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bonzo1971/roberta-base-bne-finetuned-amazon_reviews_multi
|
bonzo1971
| 2023-06-22T19:20:38Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T18:59:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2219
- Accuracy: 0.9333
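A quick way to try the classifier is via the `transformers` pipeline. This is a sketch; the exact label names returned depend on the model's configuration, and the sample review is illustrative:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bonzo1971/roberta-base-bne-finetuned-amazon_reviews_multi",
)
# A Spanish review, since the model was fine-tuned on the Spanish split
print(classifier("El producto llegó a tiempo y funciona perfectamente."))
```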
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1943 | 1.0 | 1250 | 0.1669 | 0.9327 |
| 0.0982 | 2.0 | 2500 | 0.2219 | 0.9333 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
GEMCorp/Reinforce-CartPole-v1
|
GEMCorp
| 2023-06-22T19:12:46Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:11:04Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** (i.e. Monte Carlo Policy Gradient) agent playing **CartPole-v1**.
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JoaoYukio/ppo-Huggy
|
JoaoYukio
| 2023-06-22T19:09:04Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:59:12Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: JoaoYukio/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gosorio/IMDB_HF-Tutorial
|
gosorio
| 2023-06-22T18:50:04Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T17:16:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: IMDB_HF-Tutorial
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9316
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMDB_HF-Tutorial
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2337
- Accuracy: 0.9316
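For a quick sanity check, the fine-tuned checkpoint can be loaded directly. This is a sketch; the label mapping comes from the model's configuration, and the sample sentence is illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gosorio/IMDB_HF-Tutorial")
model = AutoModelForSequenceClassification.from_pretrained("gosorio/IMDB_HF-Tutorial")

inputs = tokenizer("A surprisingly touching film with a great cast.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```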
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2314 | 1.0 | 1563 | 0.1846 | 0.9301 |
| 0.1483 | 2.0 | 3126 | 0.2337 | 0.9316 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
valerio-unifei/ppo-Huggy
|
valerio-unifei
| 2023-06-22T18:44:53Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:44:46Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: valerio-unifei/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
glidingrabidtrout/blockassist
|
glidingrabidtrout
| 2025-09-12T11:15:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foxy leaping hawk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T11:15:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foxy leaping hawk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
somtonorbert/blockassist
|
somtonorbert
| 2025-09-12T12:30:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T23:14:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AlexCrypto/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_untamed_wolf
|
AlexCrypto
| 2025-09-12T12:30:55Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am powerful untamed wolf",
"trl",
"genrl-swarm",
"I am powerful_untamed_wolf",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-02T05:57:21Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_untamed_wolf
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am powerful untamed wolf
- trl
- genrl-swarm
- I am powerful_untamed_wolf
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_untamed_wolf
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AlexCrypto/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_untamed_wolf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cryptotransparent-solo/huggingface/runs/a58amrow)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
GennadiiS/blockassist
|
GennadiiS
| 2025-09-12T12:30:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious short dove",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T06:50:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious short dove
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
insanesaga/Qwen3-0.6B-Gensyn-Swarm-nocturnal_clawed_bison
|
insanesaga
| 2025-09-12T12:30:38Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am nocturnal_clawed_bison",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-26T19:48:56Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am nocturnal_clawed_bison
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aahmad246/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hoarse_domestic_cat
|
aahmad246
| 2025-09-12T12:30:30Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am hoarse_domestic_cat",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T00:12:13Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am hoarse_domestic_cat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
w24tgd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-padded_peaceful_dove
|
w24tgd
| 2025-09-12T12:30:18Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am padded peaceful dove",
"trl",
"genrl-swarm",
"I am padded_peaceful_dove",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-21T13:51:13Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-padded_peaceful_dove
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am padded peaceful dove
- trl
- genrl-swarm
- I am padded_peaceful_dove
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-padded_peaceful_dove
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="w24tgd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-padded_peaceful_dove", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Kapitaka/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah
|
Kapitaka
| 2025-09-12T12:30:13Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tawny meek cheetah",
"trl",
"genrl-swarm",
"I am tawny_meek_cheetah",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-09T17:08:56Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tawny meek cheetah
- trl
- genrl-swarm
- I am tawny_meek_cheetah
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Kapitaka/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Dania19862017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_nocturnal_zebra
|
Dania19862017
| 2025-09-12T12:30:12Z | 178 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am unseen_nocturnal_zebra",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-31T15:36:16Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am unseen_nocturnal_zebra
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shnepsik/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-restless_fluffy_ocelot
|
Shnepsik
| 2025-09-12T12:30:08Z | 208 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am restless_fluffy_ocelot",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T14:11:45Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am restless_fluffy_ocelot
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jjsprockel/medgemma27b-luad-qlora
|
jjsprockel
| 2025-09-12T12:30:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"peft",
"lora",
"qlora",
"vision-language",
"histopathology",
"image-text-to-text",
"conversational",
"en",
"base_model:google/medgemma-27b-it",
"base_model:adapter:google/medgemma-27b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-12T02:18:17Z |
---
language:
- en
tags:
- peft
- lora
- qlora
- vision-language
- histopathology
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
base_model: google/medgemma-27b-it
---
# medgemma27b-luad-qlora
**LoRA adapters (QLoRA, 4-bit)** for automatic identification of **lung adenocarcinoma subtypes**, trained on the base model **google/medgemma-27b-it**.
---
## 🔎 Description
This repository contains the **LoRA adapters** obtained by QLoRA fine-tuning on a database of **1,194 lung adenocarcinoma cases**, annotated and distributed across the subtypes recognized by the **2021 WHO Classification**:
- Lepidic
- Acinar
- Papillary
- Micropapillary
- Solid
- Invasive mucinous
- Colloid
- Fetal
- Enteric
Case annotation and validation followed the histologic and cytologic criteria described in the literature, excluding images with artifacts or without an identifiable subtype.
---
## ⚙️ Usage
A minimal example of loading the adapters and running inference:
```python
from transformers import AutoModelForImageTextToText, AutoProcessor, BitsAndBytesConfig
from peft import PeftModel
import torch
from PIL import Image
base_id = "google/medgemma-27b-it"
adapter_id = "jjsprockel/medgemma27b-luad-qlora"
bnb_cfg = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
base = AutoModelForImageTextToText.from_pretrained(
base_id,
quantization_config=bnb_cfg,
device_map={"": "cuda"},
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True
)
model = PeftModel.from_pretrained(base, adapter_id).eval()
processor = AutoProcessor.from_pretrained(base_id)
# Inference example
img = Image.open("example.png").convert("RGB")
subtypes = ["lepidic","acinar","papillary","micropapillary","solid","invasive mucinous","colloid","fetal","enteric"]
system = "You are an expert pulmonary pathologist. Return ONLY JSON with key 'subtype' strictly from: " + ", ".join(subtypes) + "."
user = "Predict the subtype for this H&E lung adenocarcinoma patch. Only JSON."
messages = [
{"role":"system","content":[{"type":"text","text":system}]},
{"role":"user","content":[{"type":"text","text":user},{"type":"image","image":img}]}
]
templ = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
enc = processor(text=templ, images=img, return_tensors="pt")
inputs = {
"input_ids": enc["input_ids"].to(model.device),
"attention_mask": enc["attention_mask"].to(model.device),
"pixel_values": enc["pixel_values"].to(model.device, dtype=torch.bfloat16),
}
with torch.inference_mode(), torch.amp.autocast("cuda", dtype=torch.bfloat16):
out = model.generate(**inputs, max_new_tokens=32, do_sample=False)[0]
gen = out[inputs["input_ids"].shape[-1]:]
decoded = processor.decode(gen, skip_special_tokens=True)
print(decoded)
```
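Since the model is instructed to return only JSON with a single `subtype` key, a small defensive parsing step can recover the label even when the model adds stray text around the object. This snippet is illustrative and not part of the original card:
```python
# Defensive JSON extraction (illustrative; assumes `decoded` from above).
import json
import re

match = re.search(r"\{.*\}", decoded, re.DOTALL)
subtype = json.loads(match.group(0)).get("subtype") if match else None
print(subtype)  # e.g. "acinar" -- one of the nine listed subtypes
```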
|
kismunah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_tame_zebra
|
kismunah
| 2025-09-12T12:29:59Z | 73 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am robust tame zebra",
"trl",
"genrl-swarm",
"I am robust_tame_zebra",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-24T15:26:57Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_tame_zebra
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am robust tame zebra
- trl
- genrl-swarm
- I am robust_tame_zebra
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_tame_zebra
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kismunah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_tame_zebra", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Choco1994/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-chattering_furry_sloth
|
Choco1994
| 2025-09-12T12:29:56Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am chattering_furry_sloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T00:58:11Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am chattering_furry_sloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Linkdenin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feline_horned_leopard
|
Linkdenin
| 2025-09-12T12:29:52Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am feline_horned_leopard",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T14:14:28Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am feline_horned_leopard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlexanderArtT/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog
|
AlexanderArtT
| 2025-09-12T12:29:47Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tiny nimble warthog",
"trl",
"genrl-swarm",
"I am tiny_nimble_warthog",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-13T22:11:38Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tiny nimble warthog
- trl
- genrl-swarm
- I am tiny_nimble_warthog
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AlexanderArtT/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/SurMuy_v1_512512201-i1-GGUF
|
mradermacher
| 2025-09-12T12:29:46Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-12T09:49:09Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
Weighted/imatrix GGUF quants of [AingHongsin/SurMuy_v1_512512201](https://huggingface.co/AingHongsin/SurMuy_v1_512512201).
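These are standard GGUF files, so they can be run with llama.cpp or its Python bindings. A minimal sketch using llama-cpp-python follows; the filename glob is an assumption, so check the repository's file list for the quants that were actually uploaded:
```python
# Illustrative sketch: download and run one of the imatrix quants with
# llama-cpp-python. The Q4_K_M filename pattern is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/SurMuy_v1_512512201-i1-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical glob; pick a file that exists
)
out = llm("Hello, ", max_tokens=32)
print(out["choices"][0]["text"])
```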
|
billionaire1/Qwen3-0.6B-Gensyn-Swarm-quick_gregarious_fox
|
billionaire1
| 2025-09-12T12:29:34Z | 151 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am quick_gregarious_fox",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T06:51:13Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am quick_gregarious_fox
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
joker009/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_mighty_ox
|
joker009
| 2025-09-12T12:29:30Z | 67 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am unseen_mighty_ox",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-10T20:17:23Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am unseen_mighty_ox
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nopokkizu/Qwen3-0.6B-Gensyn-Swarm-vocal_scurrying_tarantula
|
nopokkizu
| 2025-09-12T12:29:22Z | 132 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vocal_scurrying_tarantula",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-06T15:20:04Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vocal_scurrying_tarantula
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tukino18540/Qwen3-0.6B-Gensyn-Swarm-agile_marine_dingo
|
tukino18540
| 2025-09-12T12:29:19Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am agile_marine_dingo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-17T08:12:33Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am agile_marine_dingo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dilip025/llama-2-7b
|
dilip025
| 2025-09-12T12:29:11Z | 589 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-03-02T17:03:29Z |
---
language:
- en
license: llama2
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
model_name: Llama 2 7B Chat
arxiv: 2307.09288
base_model: meta-llama/Llama-2-7b-chat-hf
inference: false
model_creator: Meta Llama 2
model_type: llama
pipeline_tag: text-generation
prompt_template: '[INST] <<SYS>>
You are NutriLife chatbot, you are going to get questions related to food, nutrition, health, and diet by the users from Nepal. Answer them very shortly and accurately if the message is only about food, nutrition, and diet. Otherwise, ignore.
<</SYS>>
{prompt}[/INST]
'
quantized_by: Dilip Pokhrel
---
# Llama 2 7B Chat -- Food and Nutrition
- Model creator: [Meta Llama 2](https://huggingface.co/meta-llama)
- Original model: [Llama 2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
- Fine-tuned by: [Dilip Pokhrel](https://dilippokhrel.com.np)
#### Simple example code to load this model
```python
# Load the model directly, or use a quantization technique if you have low GPU RAM
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("dilip025/llama-2-7b")
model = AutoModelForCausalLM.from_pretrained("dilip025/llama-2-7b")
system_message = 'You are NutriLife chatbot, you are going to get questions related to food, nutrition, health, and diet by the users from Nepal. Answer them very shortly and accurately if the message is only about food, nutrition, and diet. Otherwise, ignore.'
prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n Tell me some of the famous Nepali food recipes [/INST]"
num_new_tokens = 200 # Change to the number of new tokens you want to generate
# Count the number of tokens in the prompt
num_prompt_tokens = len(tokenizer(prompt)['input_ids'])
# Calculate the maximum length for the generation
max_length = num_prompt_tokens + num_new_tokens
gen = pipeline('text-generation', model=model, tokenizer=tokenizer, max_length=max_length)
result = gen(prompt)
print(result[0]['generated_text'].replace(prompt, ''))
```
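If GPU memory is tight, the quantized path hinted at in the comment above can look like this; a minimal sketch assuming `bitsandbytes` and `accelerate` are installed (not part of the original card):
```python
# 4-bit loading roughly quarters GPU memory use at a small quality cost.
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("dilip025/llama-2-7b")
model = AutoModelForCausalLM.from_pretrained(
    "dilip025/llama-2-7b",
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
```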
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
|
mradermacher/MiroThinker-14B-DPO-v0.2-i1-GGUF
|
mradermacher
| 2025-09-12T12:29:00Z | 0 | 0 | null |
[
"gguf",
"region:us"
] | null | 2025-09-12T12:28:39Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/miromind-ai/MiroThinker-14B-DPO-v0.2
|
1245erty/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion
|
1245erty
| 2025-09-12T12:28:54Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am jumping lithe scorpion",
"unsloth",
"trl",
"genrl-swarm",
"I am jumping_lithe_scorpion",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-20T16:38:45Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am jumping lithe scorpion
- unsloth
- trl
- genrl-swarm
- I am jumping_lithe_scorpion
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="1245erty/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
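The card does not publish the swarm's actual GRPO configuration or reward functions. As a rough orientation only, a minimal GRPO run with TRL looks like the sketch below; the `reward_len` toy reward and the `trl-lib/tldr` dataset are illustrative assumptions, not what this model was trained with.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 20 characters (illustrative only).
def reward_len(completions, **kwargs):
    return [-abs(20 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # any prompt dataset works

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",  # the base model named in this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```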
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Neooot/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_horned_macaw
|
Neooot
| 2025-09-12T12:28:52Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am whiskered_horned_macaw",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T09:50:17Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am whiskered_horned_macaw
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fty7i/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala
|
fty7i
| 2025-09-12T12:28:50Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pensive powerful koala",
"unsloth",
"trl",
"genrl-swarm",
"I am pensive_powerful_koala",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-21T07:46:13Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pensive powerful koala
- unsloth
- trl
- genrl-swarm
- I am pensive_powerful_koala
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fty7i/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LBST/t01_piper_pick_and_place_bimanual_diffusion
|
LBST
| 2025-09-12T12:28:47Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:gauravpradeep/t01_piper_pick_and_place_bimanual_lerobot",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-12T12:28:35Z |
---
datasets: gauravpradeep/t01_piper_pick_and_place_bimanual_lerobot
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- diffusion
- robotics
- lerobot
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
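For completeness, loading the policy from Python rather than the CLI might look like the hypothetical sketch below; the import path varies across lerobot versions, so treat it as an assumption rather than the documented API:
```python
# Hypothetical sketch -- check your installed lerobot version for the exact path.
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

policy = DiffusionPolicy.from_pretrained("LBST/t01_piper_pick_and_place_bimanual_diffusion")
policy.eval()
# At control time, build an observation batch matching the training dataset's
# features, then query the policy for the next action:
#   action = policy.select_action(observation_batch)
```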
---
## Model Details
- **License:** apache-2.0
|
AlexCryptan/Qwen3-0.6B-Gensyn-Swarm-hardy_sneaky_mule
|
AlexCryptan
| 2025-09-12T12:28:40Z | 120 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am hardy_sneaky_mule",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T22:25:58Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am hardy_sneaky_mule
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mirkanemre/ComfyUI
|
mirkanemre
| 2025-09-12T12:28:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-12T12:27:38Z |
<div align="center">
# ComfyUI
**The most powerful and modular visual AI engine and application.**
[![Website][website-shield]][website-url]
[![Dynamic JSON Badge][discord-shield]][discord-url]
[![Twitter][twitter-shield]][twitter-url]
[![Matrix][matrix-shield]][matrix-url]
<br>
[![][github-release-shield]][github-release-link]
[![][github-release-date-shield]][github-release-link]
[![][github-downloads-shield]][github-downloads-link]
[![][github-downloads-latest-shield]][github-downloads-link]
[matrix-shield]: https://img.shields.io/badge/Matrix-000000?style=flat&logo=matrix&logoColor=white
[matrix-url]: https://app.element.io/#/room/%23comfyui_space%3Amatrix.org
[website-shield]: https://img.shields.io/badge/ComfyOrg-4285F4?style=flat
[website-url]: https://www.comfy.org/
<!-- Workaround to display total user from https://github.com/badges/shields/issues/4500#issuecomment-2060079995 -->
[discord-shield]: https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2Fcomfyorg%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&logo=discord&logoColor=white&label=Discord&color=green&suffix=%20total
[discord-url]: https://www.comfy.org/discord
[twitter-shield]: https://img.shields.io/twitter/follow/ComfyUI
[twitter-url]: https://x.com/ComfyUI
[github-release-shield]: https://img.shields.io/github/v/release/comfyanonymous/ComfyUI?style=flat&sort=semver
[github-release-link]: https://github.com/comfyanonymous/ComfyUI/releases
[github-release-date-shield]: https://img.shields.io/github/release-date/comfyanonymous/ComfyUI?style=flat
[github-downloads-shield]: https://img.shields.io/github/downloads/comfyanonymous/ComfyUI/total?style=flat
[github-downloads-latest-shield]: https://img.shields.io/github/downloads/comfyanonymous/ComfyUI/latest/total?style=flat&label=downloads%40latest
[github-downloads-link]: https://github.com/comfyanonymous/ComfyUI/releases

</div>
ComfyUI lets you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. Available on Windows, Linux, and macOS.
## Get Started
#### [Desktop Application](https://www.comfy.org/download)
- The easiest way to get started.
- Available on Windows & macOS.
#### [Windows Portable Package](#installing)
- Get the latest commits and completely portable.
- Available on Windows.
#### [Manual Install](#manual-install-windows-linux)
Supports all operating systems and GPU types (NVIDIA, AMD, Intel, Apple Silicon, Ascend).
## [Examples](https://comfyanonymous.github.io/ComfyUI_examples/)
See what ComfyUI can do with the [example workflows](https://comfyanonymous.github.io/ComfyUI_examples/).
## Features
- Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.
- Image Models
- SD1.x, SD2.x ([unCLIP](https://comfyanonymous.github.io/ComfyUI_examples/unclip/))
- [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/), [SDXL Turbo](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)
- [Stable Cascade](https://comfyanonymous.github.io/ComfyUI_examples/stable_cascade/)
- [SD3 and SD3.5](https://comfyanonymous.github.io/ComfyUI_examples/sd3/)
- Pixart Alpha and Sigma
- [AuraFlow](https://comfyanonymous.github.io/ComfyUI_examples/aura_flow/)
- [HunyuanDiT](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_dit/)
- [Flux](https://comfyanonymous.github.io/ComfyUI_examples/flux/)
- [Lumina Image 2.0](https://comfyanonymous.github.io/ComfyUI_examples/lumina2/)
- [HiDream](https://comfyanonymous.github.io/ComfyUI_examples/hidream/)
- [Qwen Image](https://comfyanonymous.github.io/ComfyUI_examples/qwen_image/)
- Image Editing Models
- [Omnigen 2](https://comfyanonymous.github.io/ComfyUI_examples/omnigen/)
- [Flux Kontext](https://comfyanonymous.github.io/ComfyUI_examples/flux/#flux-kontext-image-editing-model)
- [HiDream E1.1](https://comfyanonymous.github.io/ComfyUI_examples/hidream/#hidream-e11)
- [Qwen Image Edit](https://comfyanonymous.github.io/ComfyUI_examples/qwen_image/#edit-model)
- Video Models
- [Stable Video Diffusion](https://comfyanonymous.github.io/ComfyUI_examples/video/)
- [Mochi](https://comfyanonymous.github.io/ComfyUI_examples/mochi/)
- [LTX-Video](https://comfyanonymous.github.io/ComfyUI_examples/ltxv/)
- [Hunyuan Video](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/)
- [Wan 2.1](https://comfyanonymous.github.io/ComfyUI_examples/wan/)
- [Wan 2.2](https://comfyanonymous.github.io/ComfyUI_examples/wan22/)
- Audio Models
- [Stable Audio](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
- [ACE Step](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
- 3D Models
- [Hunyuan3D 2.0](https://docs.comfy.org/tutorials/3d/hunyuan3D-2)
- Asynchronous Queue system
- Many optimizations: Only re-executes the parts of the workflow that change between executions.
- Smart memory management: can automatically run large models on GPUs with as little as 1GB of VRAM with smart offloading.
- Works even if you don't have a GPU with: ```--cpu``` (slow)
- Can load ckpt and safetensors: All in one checkpoints or standalone diffusion models, VAEs and CLIP models.
- Safe loading of ckpt, pt, pth, etc. files.
- Embeddings/Textual inversion
- [Loras (regular, locon and loha)](https://comfyanonymous.github.io/ComfyUI_examples/lora/)
- [Hypernetworks](https://comfyanonymous.github.io/ComfyUI_examples/hypernetworks/)
- Loading full workflows (with seeds) from generated PNG, WebP and FLAC files.
- Saving/Loading workflows as Json files.
- Nodes interface can be used to create complex workflows like one for [Hires fix](https://comfyanonymous.github.io/ComfyUI_examples/2_pass_txt2img/) or much more advanced ones.
- [Area Composition](https://comfyanonymous.github.io/ComfyUI_examples/area_composition/)
- [Inpainting](https://comfyanonymous.github.io/ComfyUI_examples/inpaint/) with both regular and inpainting models.
- [ControlNet and T2I-Adapter](https://comfyanonymous.github.io/ComfyUI_examples/controlnet/)
- [Upscale Models (ESRGAN, ESRGAN variants, SwinIR, Swin2SR, etc...)](https://comfyanonymous.github.io/ComfyUI_examples/upscale_models/)
- [GLIGEN](https://comfyanonymous.github.io/ComfyUI_examples/gligen/)
- [Model Merging](https://comfyanonymous.github.io/ComfyUI_examples/model_merging/)
- [LCM models and Loras](https://comfyanonymous.github.io/ComfyUI_examples/lcm/)
- Latent previews with [TAESD](#how-to-show-high-quality-previews)
- Works fully offline: core will never download anything unless you want to.
- Optional API nodes to use paid models from external providers through the online [Comfy API](https://docs.comfy.org/tutorials/api-nodes/overview).
- [Config file](extra_model_paths.yaml.example) to set the search paths for models.
Workflow examples can be found on the [Examples page](https://comfyanonymous.github.io/ComfyUI_examples/)
## Release Process
ComfyUI follows a weekly release cycle targeting Friday, but this regularly changes because of model releases or large changes to the codebase. There are three interconnected repositories:
1. **[ComfyUI Core](https://github.com/comfyanonymous/ComfyUI)**
- Releases a new stable version (e.g., v0.7.0)
- Serves as the foundation for the desktop release
2. **[ComfyUI Desktop](https://github.com/Comfy-Org/desktop)**
- Builds a new release using the latest stable core version
3. **[ComfyUI Frontend](https://github.com/Comfy-Org/ComfyUI_frontend)**
- Weekly frontend updates are merged into the core repository
- Features are frozen for the upcoming core release
- Development continues for the next release cycle
## Shortcuts
| Keybind | Explanation |
|------------------------------------|--------------------------------------------------------------------------------------------------------------------|
| `Ctrl` + `Enter` | Queue up current graph for generation |
| `Ctrl` + `Shift` + `Enter` | Queue up current graph as first for generation |
| `Ctrl` + `Alt` + `Enter` | Cancel current generation |
| `Ctrl` + `Z`/`Ctrl` + `Y` | Undo/Redo |
| `Ctrl` + `S` | Save workflow |
| `Ctrl` + `O` | Load workflow |
| `Ctrl` + `A` | Select all nodes |
| `Alt `+ `C` | Collapse/uncollapse selected nodes |
| `Ctrl` + `M` | Mute/unmute selected nodes |
| `Ctrl` + `B` | Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through) |
| `Delete`/`Backspace` | Delete selected nodes |
| `Ctrl` + `Backspace` | Delete the current graph |
| `Space` | Move the canvas around when held and moving the cursor |
| `Ctrl`/`Shift` + `Click` | Add clicked node to selection |
| `Ctrl` + `C`/`Ctrl` + `V` | Copy and paste selected nodes (without maintaining connections to outputs of unselected nodes) |
| `Ctrl` + `C`/`Ctrl` + `Shift` + `V` | Copy and paste selected nodes (maintaining connections from outputs of unselected nodes to inputs of pasted nodes) |
| `Shift` + `Drag` | Move multiple selected nodes at the same time |
| `Ctrl` + `D` | Load default graph |
| `Alt` + `+` | Canvas Zoom in |
| `Alt` + `-` | Canvas Zoom out |
| `Ctrl` + `Shift` + LMB + Vertical drag | Canvas Zoom in/out |
| `P` | Pin/Unpin selected nodes |
| `Ctrl` + `G` | Group selected nodes |
| `Q` | Toggle visibility of the queue |
| `H` | Toggle visibility of history |
| `R` | Refresh graph |
| `F` | Show/Hide menu |
| `.` | Fit view to selection (Whole graph when nothing is selected) |
| Double-Click LMB | Open node quick search palette |
| `Shift` + Drag | Move multiple wires at once |
| `Ctrl` + `Alt` + LMB | Disconnect all wires from clicked slot |
`Ctrl` can be replaced with `Cmd` for macOS users
# Installing
## Windows Portable
There is a portable standalone build for Windows that should work for running on Nvidia GPUs or for running on your CPU only on the [releases page](https://github.com/comfyanonymous/ComfyUI/releases).
### [Direct link to download](https://github.com/comfyanonymous/ComfyUI/releases/latest/download/ComfyUI_windows_portable_nvidia.7z)
Simply download, extract with [7-Zip](https://7-zip.org) and run. Make sure you put your Stable Diffusion checkpoints/models (the huge ckpt/safetensors files) in: ComfyUI\models\checkpoints
If you have trouble extracting it, right click the file -> properties -> unblock
#### How do I share models between another UI and ComfyUI?
See the [Config file](extra_model_paths.yaml.example) to set the search paths for models. In the standalone Windows build you can find this file in the ComfyUI directory. Rename this file to extra_model_paths.yaml and edit it with your favorite text editor.
## [comfy-cli](https://docs.comfy.org/comfy-cli/getting-started)
You can install and start ComfyUI using comfy-cli:
```bash
pip install comfy-cli
comfy install
```
## Manual Install (Windows, Linux)
Python 3.13 is very well supported. If you have trouble with some custom node dependencies, you can try 3.12.
Git clone this repo.
Put your SD checkpoints (the huge ckpt/safetensors files) in: models/checkpoints
Put your VAE in: models/vae
### AMD GPUs (Linux only)
If you don't already have them installed, AMD users can install ROCm and PyTorch with pip. This is the command to install the stable version:
```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.4```
This is the command to install the nightly with ROCm 6.4, which might have some performance improvements:
```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.4```
### Intel GPUs (Windows and Linux)
(Option 1) Intel Arc GPU users can install native PyTorch with torch.xpu support using pip. More information can be found [here](https://pytorch.org/docs/main/notes/get_start_xpu.html)
1. To install PyTorch xpu, use the following command:
```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu```
This is the command to install the PyTorch xpu nightly, which might have some performance improvements:
```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu```
(Option 2) Alternatively, Intel GPUs supported by Intel Extension for PyTorch (IPEX) can leverage IPEX for improved performance.
1. Visit [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu) for more information.
### NVIDIA
Nvidia users should install stable pytorch using this command:
```pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu129```
This is the command to install pytorch nightly instead, which might have performance improvements:
```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu129```
#### Troubleshooting
If you get the "Torch not compiled with CUDA enabled" error, uninstall torch with:
```pip uninstall torch```
And install it again with the command above.
### Dependencies
Install the dependencies by opening your terminal inside the ComfyUI folder and:
```pip install -r requirements.txt```
After this you should have everything installed and can proceed to running ComfyUI.
### Others:
#### Apple Mac silicon
You can install ComfyUI on Apple silicon Macs (M1 or M2) with any recent macOS version.
1. Install pytorch nightly. For instructions, read the [Accelerated PyTorch training on Mac](https://developer.apple.com/metal/pytorch/) Apple Developer guide (make sure to install the latest pytorch nightly).
1. Follow the [ComfyUI manual installation](#manual-install-windows-linux) instructions for Windows and Linux.
1. Install the ComfyUI [dependencies](#dependencies). If you have another Stable Diffusion UI [you might be able to reuse the dependencies](#i-already-have-another-ui-for-stable-diffusion-installed-do-i-really-have-to-install-all-of-these-dependencies).
1. Launch ComfyUI by running `python main.py`
> **Note**: Remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in [ComfyUI manual installation](#manual-install-windows-linux).
#### DirectML (AMD Cards on Windows)
DirectML is very badly supported and is not recommended. Some unofficial builds of PyTorch ROCm on Windows exist that will give you a much better experience than this. This readme will be updated once official PyTorch ROCm builds for Windows come out.
```pip install torch-directml``` Then you can launch ComfyUI with: ```python main.py --directml```
#### Ascend NPUs
For models compatible with Ascend Extension for PyTorch (torch_npu). To get started, ensure your environment meets the prerequisites outlined on the [installation](https://ascend.github.io/docs/sources/ascend/quick_install.html) page. Here's a step-by-step guide tailored to your platform and installation method:
1. Begin by installing the recommended or newer kernel version for Linux as specified in the Installation page of torch-npu, if necessary.
2. Proceed with the installation of Ascend Basekit, which includes the driver, firmware, and CANN, following the instructions provided for your specific platform.
3. Next, install the necessary packages for torch-npu by adhering to the platform-specific instructions on the [Installation](https://ascend.github.io/docs/sources/pytorch/install.html#pytorch) page.
4. Finally, adhere to the [ComfyUI manual installation](#manual-install-windows-linux) guide for Linux. Once all components are installed, you can run ComfyUI as described earlier.
#### Cambricon MLUs
For models compatible with Cambricon Extension for PyTorch (torch_mlu). Here's a step-by-step guide tailored to your platform and installation method:
1. Install the Cambricon CNToolkit by adhering to the platform-specific instructions on the [Installation](https://www.cambricon.com/docs/sdk_1.15.0/cntoolkit_3.7.2/cntoolkit_install_3.7.2/index.html) page.
2. Next, install PyTorch (torch_mlu) following the instructions on the [Installation](https://www.cambricon.com/docs/sdk_1.15.0/cambricon_pytorch_1.17.0/user_guide_1.9/index.html) page.
3. Launch ComfyUI by running `python main.py`
#### Iluvatar Corex
For models compatible with Iluvatar Extension for PyTorch. Here's a step-by-step guide tailored to your platform and installation method:
1. Install the Iluvatar Corex Toolkit by adhering to the platform-specific instructions on the [Installation](https://support.iluvatar.com/#/DocumentCentre?id=1&nameCenter=2&productId=520117912052801536) page.
2. Launch ComfyUI by running `python main.py`
# Running
```python main.py```
### For AMD cards not officially supported by ROCm
Try running it with this command if you have issues:
For 6700, 6600 and maybe other RDNA2 or older: ```HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py```
For AMD 7600 and maybe other RDNA3 cards: ```HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py```
### AMD ROCm Tips
You can enable experimental memory efficient attention on recent PyTorch in ComfyUI on some AMD GPUs using this command; it should already be enabled by default on RDNA3. If this improves speed for you on the latest PyTorch on your GPU, please report it so that I can enable it by default.
```TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention```
You can also try setting this env variable `PYTORCH_TUNABLEOP_ENABLED=1` which might speed things up at the cost of a very slow initial run.
# Notes
Only parts of the graph that have an output with all the correct inputs will be executed.
Only parts of the graph that change from each execution to the next will be executed; if you submit the same graph twice, only the first will be executed. If you change the last part of the graph, only the part you changed and the part that depends on it will be executed.
Dragging a generated png on the webpage or loading one will give you the full workflow including seeds that were used to create it.
You can use () to change emphasis of a word or phrase like: (good code:1.2) or (bad code:0.8). The default emphasis for () is 1.1. To use () characters in your actual prompt escape them like \\( or \\).
You can use {day|night}, for wildcard/dynamic prompts. With this syntax "{wild|card|test}" will be randomly replaced by either "wild", "card" or "test" by the frontend every time you queue the prompt. To use {} characters in your actual prompt escape them like: \\{ or \\}.
Dynamic prompts also support C-style comments, like `// comment` or `/* comment */`.
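For illustration only, the wildcard semantics described above behave like this tiny Python expander (ComfyUI's frontend implements this itself; this sketch is not part of the codebase):
```python
import random, re

def expand_wildcards(prompt: str) -> str:
    # Replace each {opt1|opt2|...} group with one randomly chosen option.
    return re.sub(r"\{([^{}]*)\}", lambda m: random.choice(m.group(1).split("|")), prompt)

print(expand_wildcards("a {day|night} scene"))  # -> "a day scene" or "a night scene"
```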
To use a textual inversion concepts/embeddings in a text prompt put them in the models/embeddings directory and use them in the CLIPTextEncode node like this (you can omit the .pt extension):
```embedding:embedding_filename.pt```
## How to show high-quality previews?
Use ```--preview-method auto``` to enable previews.
The default installation includes a fast latent preview method that's low-resolution. To enable higher-quality previews with [TAESD](https://github.com/madebyollin/taesd), download the [taesd_decoder.pth, taesdxl_decoder.pth, taesd3_decoder.pth and taef1_decoder.pth](https://github.com/madebyollin/taesd/) and place them in the `models/vae_approx` folder. Once they're installed, restart ComfyUI and launch it with `--preview-method taesd` to enable high-quality previews.
## How to use TLS/SSL?
Generate a self-signed certificate (not appropriate for shared/production use) and key by running the command: `openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes -subj "/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname"`
Use `--tls-keyfile key.pem --tls-certfile cert.pem` to enable TLS/SSL, the app will now be accessible with `https://...` instead of `http://...`.
> Note: Windows users can use [alexisrolland/docker-openssl](https://github.com/alexisrolland/docker-openssl) or one of the [3rd party binary distributions](https://wiki.openssl.org/index.php/Binaries) to run the command example above.
<br/><br/>If you use a container, note that the volume mount `-v` can be a relative path so `... -v ".\:/openssl-certs" ...` would create the key & cert files in the current directory of your command prompt or powershell terminal.
## Support and dev channel
[Discord](https://comfy.org/discord): Try the #help or #feedback channels.
[Matrix space: #comfyui_space:matrix.org](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) (it's like discord but open source).
See also: [https://www.comfy.org/](https://www.comfy.org/)
## Frontend Development
As of August 15, 2024, we have transitioned to a new frontend, which is now hosted in a separate repository: [ComfyUI Frontend](https://github.com/Comfy-Org/ComfyUI_frontend). This repository now hosts the compiled JS (from TS/Vue) under the `web/` directory.
### Reporting Issues and Requesting Features
For any bugs, issues, or feature requests related to the frontend, please use the [ComfyUI Frontend repository](https://github.com/Comfy-Org/ComfyUI_frontend). This will help us manage and address frontend-specific concerns more efficiently.
### Using the Latest Frontend
The new frontend is now the default for ComfyUI. However, please note:
1. The frontend in the main ComfyUI repository is updated fortnightly.
2. Daily releases are available in the separate frontend repository.
To use the most up-to-date frontend version:
1. For the latest daily release, launch ComfyUI with this command line argument:
```
--front-end-version Comfy-Org/ComfyUI_frontend@latest
```
2. For a specific version, replace `latest` with the desired version number:
```
--front-end-version Comfy-Org/ComfyUI_frontend@1.2.2
```
This approach allows you to easily switch between the stable fortnightly release and the cutting-edge daily updates, or even specific versions for testing purposes.
### Accessing the Legacy Frontend
If you need to use the legacy frontend for any reason, you can access it using the following command line argument:
```
--front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest
```
This will use a snapshot of the legacy frontend preserved in the [ComfyUI Legacy Frontend repository](https://github.com/Comfy-Org/ComfyUI_legacy_frontend).
# QA
### Which GPU should I buy for this?
[See this page for some recommendations](https://github.com/comfyanonymous/ComfyUI/wiki/Which-GPU-should-I-buy-for-ComfyUI)
|
mradermacher/SurMuy_v1_512512201-GGUF
|
mradermacher
| 2025-09-12T12:28:31Z | 1,010 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:AingHongsin/SurMuy_v1_512512201",
"base_model:quantized:AingHongsin/SurMuy_v1_512512201",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-11T16:40:49Z |
---
base_model: AingHongsin/SurMuy_v1_512512201
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/AingHongsin/SurMuy_v1_512512201
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SurMuy_v1_512512201-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SurMuy_v1_512512201-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
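As a concrete starting point, one of the quants below can be loaded with llama-cpp-python; a minimal sketch assuming `pip install llama-cpp-python huggingface_hub` (the chosen filename must match one of the GGUF files in the table):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/SurMuy_v1_512512201-GGUF",
    filename="SurMuy_v1_512512201.Q4_K_M.gguf",  # the "fast, recommended" quant
    n_ctx=4096,
)
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])
```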
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.Q2_K.gguf) | Q2_K | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.Q3_K_S.gguf) | Q3_K_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.Q4_K_S.gguf) | Q4_K_S | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.Q5_K_S.gguf) | Q5_K_S | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.Q5_K_M.gguf) | Q5_K_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.Q6_K.gguf) | Q6_K | 7.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.Q8_0.gguf) | Q8_0 | 9.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SurMuy_v1_512512201-GGUF/resolve/main/SurMuy_v1_512512201.f16.gguf) | f16 | 17.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
p2g7gensyn/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_slow_clam
|
p2g7gensyn
| 2025-09-12T12:28:13Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rabid slow clam",
"trl",
"genrl-swarm",
"I am rabid_slow_clam",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-20T16:40:26Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_slow_clam
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rabid slow clam
- trl
- genrl-swarm
- I am rabid_slow_clam
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_slow_clam
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="p2g7gensyn/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_slow_clam", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Krust081/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-elusive_territorial_chinchilla
|
Krust081
| 2025-09-12T12:28:10Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am elusive_territorial_chinchilla",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T00:24:58Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am elusive_territorial_chinchilla
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NamoNam/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-giant_skittish_hamster
|
NamoNam
| 2025-09-12T12:28:07Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am giant skittish hamster",
"trl",
"genrl-swarm",
"I am giant_skittish_hamster",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-20T16:41:36Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-giant_skittish_hamster
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am giant skittish hamster
- trl
- genrl-swarm
- I am giant_skittish_hamster
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-giant_skittish_hamster
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# device="cuda" assumes a CUDA-capable GPU; omit it (or pass device="cpu") to run on CPU
generator = pipeline("text-generation", model="NamoNam/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-giant_skittish_hamster", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
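For readers who want to reproduce a comparable setup locally, a minimal sketch with TRL's `GRPOTrainer` follows; the dataset and reward function are illustrative placeholders (the actual swarm training is orchestrated by Gensyn's RL Swarm and is not documented here):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder prompt dataset; the real swarm training data is not published
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward that prefers completions close to 20 characters,
# standing in for the swarm's actual reward signal
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```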
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
poltextlab/xlm-roberta-large-pooled-emotions10-v2
|
poltextlab
| 2025-09-12T12:27:48Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"pytorch",
"en",
"hu",
"fr",
"cs",
"sk",
"pl",
"de",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-11T14:50:50Z |
---
model-index:
- name: poltextlab/xlm-roberta-large-pooled-emotions10-v2
results:
- task:
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 81%
- name: F1-Score
type: f1
value: 81%
tags:
- text-classification
- pytorch
metrics:
- precision
- recall
- f1-score
language:
- en
- hu
- fr
- cs
- sk
- pl
- de
base_model:
- xlm-roberta-large
pipeline_tag: text-classification
library_name: transformers
license: cc-by-4.0
extra_gated_prompt: Our models are intended for academic use only. If you are not
affiliated with an academic institution, please provide a rationale for using our
models. Please allow us a few business days to manually review subscriptions.
extra_gated_fields:
Name: text
Country: country
Institution: text
Institution Email: text
Please specify your academic use case: text
---
# xlm-roberta-large-pooled-emotions10-v2
An `xlm-roberta-large` model fine-tuned on sentence-level multilingual training data, hand-annotated with the following labels:
- **0**: "Neutral"
- **1**: "Anger"
- **2**: "Fear"
- **3**: "Disgust"
- **4**: "Sadness"
- **5**: "Joy"
- **6**: "Hope"
- **7**: "Enthusiasm"
- **8**: "Pride"
- **9**: "Other emotion"
The training data was augmented with translated texts. It covers seven languages (English, German, French, Polish, Slovak, Czech, and Hungarian) in nearly equal shares.
## How to use the model
```python
from transformers import AutoTokenizer, pipeline

# Load the slow (sentencepiece-based) tokenizer explicitly; see "Debugging and issues" below
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-pooled-emotions10-v2",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
)

text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
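By default the pipeline returns only the top label and its score. To inspect the full distribution over all ten emotion classes, you can pass `top_k=None` at call time (a minimal sketch using the standard `transformers` text-classification pipeline API):
```python
# Return a score for every label instead of only the highest-scoring one
pipe(text, top_k=None)
```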
# Classification Report
## Overall Performance:
* **Accuracy:** 81%
* **Macro Avg:** Precision: 0.82, Recall: 0.81, F1-score: 0.81
* **Weighted Avg:** Precision: 0.81, Recall: 0.81, F1-score: 0.81
## Per-Class Metrics:
| Label | Precision | Recall | F1-score | Support |
|:-----------------|----------:|-------:|---------:|--------:|
| Neutral (0) | 0.81 | 0.88 | 0.85 | 9367 |
| Anger (1) | 0.73 | 0.70 | 0.72 | 5433 |
| Fear (2) | 0.86 | 0.84 | 0.85 | 5434 |
| Disgust (3) | 0.95 | 0.95 | 0.95 | 5437 |
| Sadness (4) | 0.90 | 0.85 | 0.88 | 5434 |
| Joy (5) | 0.84 | 0.85 | 0.85 | 5162 |
| Hope (6) | 0.59 | 0.63 | 0.61 | 5437 |
| Enthusiasm (7) | 0.70 | 0.63 | 0.67 | 5433 |
| Pride (8) | 0.82 | 0.82 | 0.82 | 5435 |
| Other emotion (9)| 0.98 | 0.95 | 0.97 | 2051 |
Total samples: **54,623**
## Inference platform
This model is used by the [Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.
## Cooperation
Model performance can be significantly improved by extending our training sets. We welcome submissions of coded corpora (in any domain or language) at poltextlab{at}poltextlab{dot}com or through the [Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To use the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
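Putting both remedies together, a minimal sketch (the install command and keyword argument are exactly those described above):
```python
# First install the tokenizer dependency if needed:
#   pip install sentencepiece
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-pooled-emotions10-v2",
    ignore_mismatched_sizes=True,  # works around the RuntimeError described above
)
```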
|
Rabot44/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pesty_bipedal_spider
|
Rabot44
| 2025-09-12T12:26:50Z | 146 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am pesty_bipedal_spider",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-23T17:17:45Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am pesty_bipedal_spider
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
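Since no official snippet is provided, here is a minimal sketch using the standard `transformers` text-generation pipeline (the prompt and generation settings are illustrative assumptions):
```python
from transformers import pipeline

# Hypothetical usage; adjust generation settings to taste
generator = pipeline(
    "text-generation",
    model="Rabot44/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pesty_bipedal_spider",
)
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```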
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757679912
|
stonermay
| 2025-09-12T12:26:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T12:26:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Spider-man1/blockassist
|
Spider-man1
| 2025-09-12T12:25:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peckish durable toucan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T17:29:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peckish durable toucan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
leonmullerrr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse
|
leonmullerrr
| 2025-09-12T12:25:47Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am coiled wild mouse",
"trl",
"genrl-swarm",
"I am coiled_wild_mouse",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-04T13:50:15Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am coiled wild mouse
- trl
- genrl-swarm
- I am coiled_wild_mouse
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="leonmullerrr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
p2g5dolph3/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_ferocious_rhino
|
p2g5dolph3
| 2025-09-12T12:25:29Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am peckish ferocious rhino",
"trl",
"genrl-swarm",
"I am peckish_ferocious_rhino",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-17T21:31:35Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_ferocious_rhino
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am peckish ferocious rhino
- trl
- genrl-swarm
- I am peckish_ferocious_rhino
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_ferocious_rhino
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="p2g5dolph3/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_ferocious_rhino", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|