| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, UTC]: 2020-02-15 11:33:14 to 2025-09-11 06:30:11) | downloads (int64: 0 to 223M) | likes (int64: 0 to 11.7k) | library_name (555 classes) | tags (list: 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, UTC]: 2022-03-02 23:29:04 to 2025-09-11 06:29:58) | card (string: 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
chittickisaias/blockassist-bc-fishy_meek_baboon_1757561423
|
chittickisaias
| 2025-09-11T03:30:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy meek baboon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:30:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy meek baboon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mccomasadxdwu/blockassist-bc-dense_lithe_chinchilla_1757561423
|
mccomasadxdwu
| 2025-09-11T03:30:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dense lithe chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:30:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dense lithe chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nanonamosgro/blockassist-bc-snorting_roaring_mink_1757561390
|
nanonamosgro
| 2025-09-11T03:30:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting roaring mink",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:30:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting roaring mink
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
trungpq/rlcc-new-taste-upsample_replacement-absa-None
|
trungpq
| 2025-09-11T03:29:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert_with_absa",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T16:37:21Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rlcc-new-taste-upsample_replacement-absa-None
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlcc-new-taste-upsample_replacement-absa-None
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1128
- Accuracy: 0.47
- F1 Macro: 0.5513
- Precision Macro: 0.5679
- Recall Macro: 0.5646
- Total Tf: [188, 212, 988, 212]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 46
- num_epochs: 25
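The linear scheduler with warmup listed above ramps the learning rate up to its peak over the warmup steps, then decays it linearly to zero over the remaining steps. A minimal sketch of that behavior, assuming the standard `transformers` linear-with-warmup semantics and using this run's numbers (47 steps per epoch from the results table below, 25 epochs, 46 warmup steps):

```python
def linear_lr_with_warmup(step, base_lr, warmup_steps, total_steps):
    """LR rises linearly to base_lr over warmup_steps, then decays linearly to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

base_lr, warmup, total = 2e-5, 46, 47 * 25  # lr 2e-05, 46 warmup steps, 1175 total steps

assert linear_lr_with_warmup(0, base_lr, warmup, total) == 0.0        # start of warmup
assert linear_lr_with_warmup(46, base_lr, warmup, total) == base_lr   # peak at end of warmup
assert linear_lr_with_warmup(total, base_lr, warmup, total) == 0.0    # fully decayed
```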
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:--------------------:|
| 1.1037 | 1.0 | 47 | 1.1011 | 0.415 | 0.4777 | 0.5341 | 0.5256 | [166, 234, 966, 234] |
| 1.0734 | 2.0 | 94 | 1.1209 | 0.3825 | 0.4892 | 0.4953 | 0.4919 | [153, 247, 953, 247] |
| 0.9888 | 3.0 | 141 | 1.1190 | 0.425 | 0.5106 | 0.5243 | 0.5237 | [170, 230, 970, 230] |
| 0.8303 | 4.0 | 188 | 1.1649 | 0.465 | 0.5614 | 0.5719 | 0.5610 | [186, 214, 986, 214] |
| 0.6665 | 5.0 | 235 | 1.2158 | 0.465 | 0.5535 | 0.5616 | 0.5586 | [186, 214, 986, 214] |
| 0.5467 | 6.0 | 282 | 1.3128 | 0.46 | 0.5498 | 0.5613 | 0.5557 | [184, 216, 984, 216] |
| 0.3923 | 7.0 | 329 | 1.4469 | 0.45 | 0.5385 | 0.5591 | 0.5475 | [180, 220, 980, 220] |
| 0.3473 | 8.0 | 376 | 1.5892 | 0.45 | 0.5323 | 0.5473 | 0.5467 | [180, 220, 980, 220] |
| 0.2827 | 9.0 | 423 | 1.5845 | 0.4925 | 0.5832 | 0.5894 | 0.5838 | [197, 203, 997, 203] |
| 0.2261 | 10.0 | 470 | 1.7583 | 0.46 | 0.5515 | 0.5778 | 0.5589 | [184, 216, 984, 216] |
| 0.1761 | 11.0 | 517 | 1.7586 | 0.4975 | 0.5813 | 0.5835 | 0.5859 | [199, 201, 999, 201] |
| 0.1424 | 12.0 | 564 | 1.8290 | 0.485 | 0.5715 | 0.5793 | 0.5763 | [194, 206, 994, 206] |
| 0.1146 | 13.0 | 611 | 1.9360 | 0.4875 | 0.5714 | 0.5882 | 0.5786 | [195, 205, 995, 205] |
| 0.0923 | 14.0 | 658 | 2.1128 | 0.47 | 0.5513 | 0.5679 | 0.5646 | [188, 212, 988, 212] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
trungpq/rlcc-new-appearance-class-weight-absa-avg
|
trungpq
| 2025-09-11T03:29:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert_with_absa",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T16:36:38Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rlcc-new-appearance-class-weight-absa-avg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlcc-new-appearance-class-weight-absa-avg
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4329
- Accuracy: 0.6890
- F1 Macro: 0.6441
- Precision Macro: 0.6636
- Recall Macro: 0.6396
- Total Tf: [288, 130, 1124, 130]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 34
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:---------------------:|
| 1.1047 | 1.0 | 35 | 1.0955 | 0.6077 | 0.4414 | 0.4612 | 0.5129 | [254, 164, 1090, 164] |
| 1.1218 | 2.0 | 70 | 1.1048 | 0.6029 | 0.3930 | 0.3502 | 0.5 | [252, 166, 1088, 166] |
| 1.0841 | 3.0 | 105 | 1.1170 | 0.6029 | 0.4043 | 0.5381 | 0.5015 | [252, 166, 1088, 166] |
| 0.9814 | 4.0 | 140 | 1.0832 | 0.6196 | 0.4669 | 0.7138 | 0.5287 | [259, 159, 1095, 159] |
| 0.8862 | 5.0 | 175 | 1.0824 | 0.6244 | 0.5317 | 0.5683 | 0.5476 | [261, 157, 1097, 157] |
| 0.7938 | 6.0 | 210 | 1.0658 | 0.6507 | 0.5722 | 0.6089 | 0.5899 | [272, 146, 1108, 146] |
| 0.6662 | 7.0 | 245 | 1.1087 | 0.6555 | 0.5998 | 0.6233 | 0.5967 | [274, 144, 1110, 144] |
| 0.5241 | 8.0 | 280 | 1.1202 | 0.6507 | 0.6009 | 0.6109 | 0.6013 | [272, 146, 1108, 146] |
| 0.4825 | 9.0 | 315 | 1.2163 | 0.6675 | 0.5937 | 0.6619 | 0.5988 | [279, 139, 1115, 139] |
| 0.4072 | 10.0 | 350 | 1.1358 | 0.6938 | 0.6519 | 0.6679 | 0.6464 | [290, 128, 1126, 128] |
| 0.3274 | 11.0 | 385 | 1.2639 | 0.6746 | 0.6219 | 0.6747 | 0.6170 | [282, 136, 1118, 136] |
| 0.2657 | 12.0 | 420 | 1.2169 | 0.7057 | 0.6691 | 0.6802 | 0.6645 | [295, 123, 1131, 123] |
| 0.2353 | 13.0 | 455 | 1.3294 | 0.6842 | 0.6387 | 0.6598 | 0.6343 | [286, 132, 1122, 132] |
| 0.1644 | 14.0 | 490 | 1.4121 | 0.6627 | 0.6021 | 0.6550 | 0.6004 | [277, 141, 1113, 141] |
| 0.1799 | 15.0 | 525 | 1.4001 | 0.6722 | 0.6222 | 0.6489 | 0.6177 | [281, 137, 1117, 137] |
| 0.1435 | 16.0 | 560 | 1.4439 | 0.6818 | 0.6321 | 0.6658 | 0.6298 | [285, 133, 1121, 133] |
| 0.1515 | 17.0 | 595 | 1.4329 | 0.6890 | 0.6441 | 0.6636 | 0.6396 | [288, 130, 1124, 130] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
ferdinangurakuqije/blockassist-bc-pensive_prickly_baboon_1757561329
|
ferdinangurakuqije
| 2025-09-11T03:29:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive prickly baboon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:28:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive prickly baboon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zaimkibriya7859/blockassist-bc-exotic_soaring_beaver_1757561317
|
zaimkibriya7859
| 2025-09-11T03:28:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic soaring beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:28:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic soaring beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
burhansjohnny/blockassist-bc-dappled_raging_yak_1757561273
|
burhansjohnny
| 2025-09-11T03:28:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dappled raging yak",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:28:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dappled raging yak
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
toruns/blockassist-bc-insectivorous_bold_lion_1757561253
|
toruns
| 2025-09-11T03:27:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:27:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
iyaadshikder1546/blockassist-bc-pensive_agile_bee_1757561267
|
iyaadshikder1546
| 2025-09-11T03:27:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive agile bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:27:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive agile bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cintroncdgkq/blockassist-bc-monstrous_whistling_dinosaur_1757561238
|
cintroncdgkq
| 2025-09-11T03:27:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous whistling dinosaur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:27:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous whistling dinosaur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
misaeluoyz/blockassist-bc-bipedal_soaring_porcupine_1757561214
|
misaeluoyz
| 2025-09-11T03:27:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reptilian bellowing crocodile",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:26:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reptilian bellowing crocodile
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
raileshikder7241/blockassist-bc-slender_amphibious_cheetah_1757561182
|
raileshikder7241
| 2025-09-11T03:26:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slender amphibious cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:26:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slender amphibious cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
baqueginny/blockassist-bc-scruffy_screeching_magpie_1757561179
|
baqueginny
| 2025-09-11T03:26:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slender amphibious cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:26:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slender amphibious cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jalkafariya/blockassist-bc-stealthy_hoarse_toucan_1757561156
|
jalkafariya
| 2025-09-11T03:26:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy hoarse toucan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:26:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy hoarse toucan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ycseihhtdtcihtdyguguh/blockassist-bc-tough_tricky_eel_1757561155
|
ycseihhtdtcihtdyguguh
| 2025-09-11T03:26:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy hoarse toucan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:26:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy hoarse toucan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DevQuasar/LLM360.K2-Think-GGUF
|
DevQuasar
| 2025-09-11T03:25:57Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:LLM360/K2-Think",
"base_model:quantized:LLM360/K2-Think",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-11T00:46:43Z |
---
base_model:
- LLM360/K2-Think
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [LLM360/K2-Think](https://huggingface.co/LLM360/K2-Think)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
quiroshedge/blockassist-bc-stinging_purring_ape_1757561130
|
quiroshedge
| 2025-09-11T03:25:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wily squeaky mule",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:25:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wily squeaky mule
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jrfszy/blockassist-bc-barky_wary_sandpiper_1757561107
|
jrfszy
| 2025-09-11T03:25:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky wary sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:25:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky wary sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vullnetbogdaniy81/blockassist-bc-soft_curious_duck_1757561100
|
vullnetbogdaniy81
| 2025-09-11T03:25:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft curious duck",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:25:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft curious duck
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
asdfasdasda/Qwen2.5-VL-3B-Instruct-Q8_0-GGUF
|
asdfasdasda
| 2025-09-11T03:24:29Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"multimodal",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-09-11T03:24:15Z |
---
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- llama-cpp
- gguf-my-repo
library_name: transformers
base_model: Qwen/Qwen2.5-VL-3B-Instruct
---
# asdfasdasda/Qwen2.5-VL-3B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-VL-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo asdfasdasda/Qwen2.5-VL-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-vl-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo asdfasdasda/Qwen2.5-VL-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-vl-3b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo asdfasdasda/Qwen2.5-VL-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-vl-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo asdfasdasda/Qwen2.5-VL-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-vl-3b-instruct-q8_0.gguf -c 2048
```
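The `Q8_0` in the repo name is llama.cpp's 8-bit blockwise quantization: each block of weights is stored as int8 values plus one per-block scale. A rough pure-Python sketch of the idea (simplified for illustration: one float scale per block, ignoring GGUF's actual fp16 scale encoding and fixed 32-element block layout):

```python
def q8_0_quantize_block(block):
    """Quantize one block of floats to int8 codes plus a scale (Q8_0-style, simplified)."""
    amax = max(abs(x) for x in block)          # largest magnitude in the block
    scale = amax / 127.0 if amax > 0 else 0.0  # map [-amax, amax] onto [-127, 127]
    q = [round(x / scale) if scale else 0 for x in block]
    return scale, q

def q8_0_dequantize_block(scale, q):
    return [scale * v for v in q]

weights = [0.5, -1.27, 0.0, 0.635]
scale, q = q8_0_quantize_block(weights)
restored = q8_0_dequantize_block(scale, q)
# The largest-magnitude weight maps to +/-127; round-trip error is at most half a step.
assert q[1] == -127
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```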
|
oyshimimi50/blockassist-bc-alert_colorful_pigeon_1757561052
|
oyshimimi50
| 2025-09-11T03:24:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert colorful pigeon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:24:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert colorful pigeon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
babs/musan-distilhubert-classifier
|
babs
| 2025-09-11T03:24:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hubert",
"audio-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-09-11T03:24:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andidedjag513/blockassist-bc-monstrous_subtle_kingfisher_1757561038
|
andidedjag513
| 2025-09-11T03:24:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous subtle kingfisher",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:24:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous subtle kingfisher
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
slatinlatrina/blockassist-bc-mammalian_sneaky_prawn_1757561011
|
slatinlatrina
| 2025-09-11T03:23:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian sneaky prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:23:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian sneaky prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
harmonyblevinsm0/blockassist-bc-silent_miniature_monkey_1757560930
|
harmonyblevinsm0
| 2025-09-11T03:23:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent miniature monkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:23:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent miniature monkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nitishgulati/llama-fitness-finetuned
|
nitishgulati
| 2025-09-11T03:22:53Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"lora",
"transformers",
"text-generation",
"base_model:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"region:us"
] |
text-generation
| 2025-09-11T03:22:44Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- base_model:adapter:meta-llama/Llama-3.2-3B
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: llama-fitness-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-fitness-finetuned
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 2000
- mixed_precision_training: Native AMP
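The hyperparameters above combine a per-device batch of 2 with 4 gradient-accumulation steps to reach the effective batch size of 8: gradients from 4 micro-batches are averaged before each optimizer step. A minimal sketch of that mechanic with a toy scalar parameter and plain SGD (illustrative only, not the actual training loop):

```python
def sgd_with_accumulation(grads_per_microbatch, lr, accum_steps):
    """Average micro-batch gradients over accum_steps, then take one optimizer step."""
    param, updates = 0.0, []
    acc, n = 0.0, 0
    for g in grads_per_microbatch:
        acc += g
        n += 1
        if n == accum_steps:                 # one optimizer step per accum_steps micro-batches
            param -= lr * (acc / accum_steps)
            updates.append(param)
            acc, n = 0.0, 0
    return param, updates

# 8 micro-batches (each of size 2) -> 2 optimizer steps, effective batch size 2 * 4 = 8
param, updates = sgd_with_accumulation([1.0] * 8, lr=0.1, accum_steps=4)
assert len(updates) == 2
assert abs(param - (-0.2)) < 1e-12
```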
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6154 | 6.8966 | 200 | 1.7424 |
| 0.1587 | 13.7931 | 400 | 2.2105 |
| 0.0977 | 20.6897 | 600 | 2.3838 |
| 0.0776 | 27.5862 | 800 | 2.4887 |
| 0.0636 | 34.4828 | 1000 | 2.8994 |
| 0.0614 | 41.3793 | 1200 | 2.7971 |
| 0.0574 | 48.2759 | 1400 | 2.9054 |
| 0.0562 | 55.1724 | 1600 | 3.0105 |
| 0.055 | 62.0690 | 1800 | 3.0397 |
| 0.0538 | 68.9655 | 2000 | 3.0823 |
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
sumanthbvss/MLModels
|
sumanthbvss
| 2025-09-11T03:22:38Z | 47 | 0 | null |
[
"tflite",
"region:us"
] | null | 2025-07-05T09:37:44Z |
# ML Models Directory
Place the following TFLite models in this directory:
1. `gemma.tflite` - Gemma 2B model for text conversations
2. `fastspeech2.tflite` - FastSpeech 2 model for speech synthesis
3. `mobilenet_v2.tflite` - MobileNetV2 model for visual processing
## Model Specifications
### Gemma 2B
- Input: Text tokens (max length 512)
- Output: Text generation
- Size: ~2GB
### FastSpeech 2
- Input: Text sequence
- Output: Mel spectrogram for voice synthesis
- Features: Male/female voice options
### MobileNetV2
- Input: 224x224 RGB image
- Output: 1000-class classification
- Features: Optimized for mobile devices
|
OPPOer/Qwen-Image-Pruning
|
OPPOer
| 2025-09-11T03:22:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"en",
"zh",
"base_model:Qwen/Qwen-Image",
"base_model:finetune:Qwen/Qwen-Image",
"license:apache-2.0",
"diffusers:QwenImagePipeline",
"region:us"
] |
text-to-image
| 2025-09-09T11:02:16Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen-Image
language:
- en
- zh
library_name: diffusers
pipeline_tag: text-to-image
---
<div align="center">
<h1>Qwen-Image-Pruning</h1>
<a href='https://github.com/OPPO-Mente-Lab/Qwen-Image-Pruning'><img src="https://img.shields.io/badge/GitHub-OPPOer-blue.svg?logo=github" alt="GitHub"></a>
</div>
## Introduction
This open-source project is based on Qwen-Image and experiments with model pruning: 20 layers were removed and the weights of the remaining 40 layers retained, yielding a model of 13.3B parameters. The pruned model shows a slight drop in objective metrics, and the pruned version will continue to be iterated on. It also supports adapting and loading community models such as LoRA and ControlNet; please stay tuned. For the inference scripts, see https://github.com/OPPO-Mente-Lab/Qwen-Image-Pruning.
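As a back-of-the-envelope check of the parameter count above, keeping 40 of the original 60 transformer layers scales the model by roughly 40/60, assuming a ~20B-parameter base model whose size is dominated by uniformly sized transformer blocks (both assumptions are ours, not stated in this card):

```python
# Rough pruning-ratio arithmetic (assumed: ~20B base model, uniform per-layer cost).
original_layers = 60        # 40 kept + 20 removed
kept_layers = 40
original_params_b = 20.0    # assumed base parameter count, in billions

pruned_params_b = original_params_b * kept_layers / original_layers
print(round(pruned_params_b, 1))  # → 13.3, matching the stated 13.3B
```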
<div align="center">
<img src="bench.png">
</div>
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1757559380
|
lisaozill03
| 2025-09-11T03:22:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:22:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xGareeb/blockassist
|
0xGareeb
| 2025-09-11T03:22:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving jumping llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:02:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving jumping llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
homerrice918/blockassist-bc-leaping_opaque_fox_1757560908
|
homerrice918
| 2025-09-11T03:22:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leaping opaque fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:22:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leaping opaque fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1757560900
|
omerbektass
| 2025-09-11T03:21:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:21:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shikderazriel6453/blockassist-bc-burrowing_thorny_gibbon_1757560897
|
shikderazriel6453
| 2025-09-11T03:21:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"burrowing thorny gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:21:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing thorny gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jemijorna596/blockassist-bc-reclusive_monstrous_pig_1757560884
|
jemijorna596
| 2025-09-11T03:21:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive monstrous pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:21:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive monstrous pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
huseyinatahaninan/agenttuning_v4_15k_tag4-SFT-Qwen3-8B
|
huseyinatahaninan
| 2025-09-11T03:20:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T03:05:37Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: agenttuning_v4_15k_tag4-SFT-Qwen3-8B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# agenttuning_v4_15k_tag4-SFT-Qwen3-8B
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the agenttuning_v4_15k_tag4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
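The distributed settings above imply the listed total train batch size. As a quick sanity check, the effective batch size is the per-device batch size multiplied by the number of devices and the gradient-accumulation steps; the helper below is an illustrative sketch using the card's numbers, not part of the training code.

```python
# Sanity-check the effective (total) train batch size implied by the
# per-device settings listed above. The numbers are copied from the card.
def effective_batch_size(per_device: int, num_devices: int, grad_accum: int) -> int:
    """Total examples consumed per optimizer step."""
    return per_device * num_devices * grad_accum

total = effective_batch_size(per_device=1, num_devices=8, grad_accum=4)
print(total)  # → 32, matching total_train_batch_size above
```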
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nyu-dice-lab/VeriThoughts-Reasoning-14B-Qwen3
|
nyu-dice-lab
| 2025-09-11T03:20:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-10T23:02:32Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-14B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: VeriThoughts-Reasoning-14B-Qwen3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VeriThoughts-Reasoning-14B-Qwen3
This model is a fine-tuned version of [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) on the reasoning_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
trungpq/rlcc-new-palate-upsample_replacement-absa-None
|
trungpq
| 2025-09-11T03:20:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert_with_absa",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T16:37:15Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rlcc-new-palate-upsample_replacement-absa-None
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlcc-new-palate-upsample_replacement-absa-None
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5296
- Accuracy: 0.7125
- F1 Macro: 0.4872
- Precision Macro: 0.5078
- Recall Macro: 0.5020
- Total Tf: [290, 117, 1104, 117]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 21
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:---------------------:|
| 1.1436 | 1.0 | 22 | 1.0968 | 0.7420 | 0.4912 | 0.4546 | 0.5464 | [302, 105, 1116, 105] |
| 1.0851 | 2.0 | 44 | 1.1121 | 0.7174 | 0.4517 | 0.4552 | 0.5112 | [292, 115, 1106, 115] |
| 0.9986 | 3.0 | 66 | 1.1304 | 0.7248 | 0.5044 | 0.5411 | 0.5351 | [295, 112, 1109, 112] |
| 0.8974 | 4.0 | 88 | 1.1597 | 0.7346 | 0.5352 | 0.5595 | 0.5511 | [299, 108, 1113, 108] |
| 0.8154 | 5.0 | 110 | 1.1627 | 0.7297 | 0.5322 | 0.5420 | 0.5406 | [297, 110, 1111, 110] |
| 0.703 | 6.0 | 132 | 1.2983 | 0.7322 | 0.5293 | 0.5385 | 0.5423 | [298, 109, 1112, 109] |
| 0.5548 | 7.0 | 154 | 1.3239 | 0.7101 | 0.4950 | 0.5042 | 0.5010 | [289, 118, 1103, 118] |
| 0.4741 | 8.0 | 176 | 1.4770 | 0.7273 | 0.5113 | 0.5389 | 0.5320 | [296, 111, 1110, 111] |
| 0.366 | 9.0 | 198 | 1.5296 | 0.7125 | 0.4872 | 0.5078 | 0.5020 | [290, 117, 1104, 117] |
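Note that the headline evaluation loss (1.5296) is the final epoch's, while the lowest validation loss in the table occurs at epoch 1. Picking the loss-minimizing checkpoint from such a table can be sketched as follows; the (epoch, loss) pairs are copied from the table above, and the selection logic is a generic illustration, not the script used for this run.

```python
# Select the best epoch by validation loss from the results table above.
results = [
    (1, 1.0968), (2, 1.1121), (3, 1.1304), (4, 1.1597), (5, 1.1627),
    (6, 1.2983), (7, 1.3239), (8, 1.4770), (9, 1.5296),
]
best_epoch, best_loss = min(results, key=lambda r: r[1])
print(best_epoch, best_loss)  # → 1 1.0968
```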
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
adelactbeel/blockassist-bc-stinky_humming_alligator_1757560780
|
adelactbeel
| 2025-09-11T03:19:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinky humming alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:19:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinky humming alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
makhiovrnl/blockassist-bc-marine_armored_weasel_1757560756
|
makhiovrnl
| 2025-09-11T03:19:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine armored weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:19:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine armored weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/MiroThinker-32B-SFT-v0.2-GGUF
|
mradermacher
| 2025-09-11T03:19:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"agent",
"open-source",
"miromind",
"en",
"base_model:miromind-ai/MiroThinker-32B-SFT-v0.2",
"base_model:quantized:miromind-ai/MiroThinker-32B-SFT-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-11T02:25:59Z |
---
base_model: miromind-ai/MiroThinker-32B-SFT-v0.2
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- agent
- open-source
- miromind
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/miromind-ai/MiroThinker-32B-SFT-v0.2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MiroThinker-32B-SFT-v0.2-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-32B-SFT-v0.2-GGUF/resolve/main/MiroThinker-32B-SFT-v0.2.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-32B-SFT-v0.2-GGUF/resolve/main/MiroThinker-32B-SFT-v0.2.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-32B-SFT-v0.2-GGUF/resolve/main/MiroThinker-32B-SFT-v0.2.Q3_K_M.gguf) | Q3_K_M | 16.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-32B-SFT-v0.2-GGUF/resolve/main/MiroThinker-32B-SFT-v0.2.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-32B-SFT-v0.2-GGUF/resolve/main/MiroThinker-32B-SFT-v0.2.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-32B-SFT-v0.2-GGUF/resolve/main/MiroThinker-32B-SFT-v0.2.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-32B-SFT-v0.2-GGUF/resolve/main/MiroThinker-32B-SFT-v0.2.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-32B-SFT-v0.2-GGUF/resolve/main/MiroThinker-32B-SFT-v0.2.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
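One way to choose among these files is to take the largest quant that fits your memory budget, since within this table larger files generally mean higher quality. The sizes below are copied from the table; the budget and the selection heuristic are illustrative assumptions, not an official recommendation.

```python
from typing import Optional

# Sizes in GB, copied from the quant table above.
quants = {
    "Q2_K": 12.4, "Q3_K_S": 14.5, "Q3_K_M": 16.1, "Q3_K_L": 17.4,
    "Q4_K_S": 18.9, "Q4_K_M": 19.9, "Q6_K": 27.0, "Q8_0": 34.9,
}

def pick_quant(budget_gb: float) -> Optional[str]:
    """Largest file that still fits the budget (larger ≈ higher quality here)."""
    fitting = {name: size for name, size in quants.items() if size <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(20.0))  # → Q4_K_M
```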
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jrfszy/blockassist-bc-barky_wary_sandpiper_1757560734
|
jrfszy
| 2025-09-11T03:19:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky wary sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:18:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky wary sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
trungpq/rlcc-new-aroma-upsample_replacement-absa-max
|
trungpq
| 2025-09-11T03:18:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert_with_absa",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T16:38:05Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rlcc-new-aroma-upsample_replacement-absa-max
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlcc-new-aroma-upsample_replacement-absa-max
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0704
- Accuracy: 0.6612
- F1 Macro: 0.5333
- Precision Macro: 0.5807
- Recall Macro: 0.5446
- Total Tf: [283, 145, 1139, 145]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:---------------------:|
| 1.1278 | 1.0 | 41 | 1.1693 | 0.5514 | 0.4111 | 0.3820 | 0.4894 | [236, 192, 1092, 192] |
| 1.0553 | 2.0 | 82 | 1.1281 | 0.6332 | 0.4715 | 0.5306 | 0.5056 | [271, 157, 1127, 157] |
| 0.7136 | 3.0 | 123 | 1.0999 | 0.6589 | 0.5540 | 0.5610 | 0.5553 | [282, 146, 1138, 146] |
| 0.5505 | 4.0 | 164 | 1.2349 | 0.6706 | 0.5738 | 0.5756 | 0.5769 | [287, 141, 1143, 141] |
| 0.3836 | 5.0 | 205 | 1.4542 | 0.6449 | 0.5363 | 0.5463 | 0.5439 | [276, 152, 1132, 152] |
| 0.2161 | 6.0 | 246 | 1.5904 | 0.6729 | 0.5483 | 0.6033 | 0.5557 | [288, 140, 1144, 140] |
| 0.1925 | 7.0 | 287 | 1.7096 | 0.6729 | 0.5319 | 0.6058 | 0.5465 | [288, 140, 1144, 140] |
| 0.1986 | 8.0 | 328 | 1.8097 | 0.6659 | 0.5672 | 0.5726 | 0.5663 | [285, 143, 1141, 143] |
| 0.1202 | 9.0 | 369 | 2.0704 | 0.6612 | 0.5333 | 0.5807 | 0.5446 | [283, 145, 1139, 145] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
BootesVoid/cmferuoiz03hpx0n09wutcrgf_cmfeshsuk03hzx0n06a18sohg_2
|
BootesVoid
| 2025-09-11T03:18:12Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-11T03:18:10Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SUMISA
---
# Cmferuoiz03Hpx0N09Wutcrgf_Cmfeshsuk03Hzx0N06A18Sohg_2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SUMISA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SUMISA",
"lora_weights": "https://huggingface.co/BootesVoid/cmferuoiz03hpx0n09wutcrgf_cmfeshsuk03hzx0n06a18sohg_2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmferuoiz03hpx0n09wutcrgf_cmfeshsuk03hzx0n06a18sohg_2', weight_name='lora.safetensors')
image = pipeline('SUMISA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmferuoiz03hpx0n09wutcrgf_cmfeshsuk03hzx0n06a18sohg_2/discussions) to add images that show off what you’ve made with this LoRA.
|
navneetthakor/LLFG-3
|
navneetthakor
| 2025-09-11T03:17:40Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-2-2b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-2b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-14T11:37:23Z |
---
base_model: unsloth/gemma-2-2b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** navneetthakor
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-2b-it-bnb-4bit
|
arzaanshikder7562/blockassist-bc-darting_sniffing_rhino_1757560617
|
arzaanshikder7562
| 2025-09-11T03:17:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting sniffing rhino",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:17:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting sniffing rhino
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abdoosh1000/flan-t5-autonomous-workspace
|
abdoosh1000
| 2025-09-11T03:16:53Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-02T04:42:37Z |
# FLAN-T5 Autonomous Training Workspace
This is a unified repository for autonomous FLAN-T5 model training operations.
## Structure
- `tracking/` - Training state and progress tracking files
- `models/` - Trained model checkpoints and metadata
- `datasets/` - Dataset processing state and chunk information
- `logs/` - Training logs and metrics
## Latest Status
Last updated: 2025-09-10T15:22:17.340834
Workspace created by: Autonomous FLAN-T5 Trainer
## Usage
This repository is automatically managed by the autonomous training system.
All training progress, model states, and dataset processing information are tracked here.
|
vendi11/blockassist-bc-placid_placid_llama_1757560561
|
vendi11
| 2025-09-11T03:16:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:16:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kinghamtruman/blockassist-bc-regal_docile_wildebeest_1757560580
|
kinghamtruman
| 2025-09-11T03:16:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal docile wildebeest",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:16:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal docile wildebeest
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cintroncdgkq/blockassist-bc-monstrous_whistling_dinosaur_1757560566
|
cintroncdgkq
| 2025-09-11T03:16:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous whistling dinosaur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:16:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous whistling dinosaur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jenniellama/task-14-microsoft-Phi-4-mini-instruct
|
jenniellama
| 2025-09-11T03:16:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"region:us"
] | null | 2025-09-10T03:39:52Z |
---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
srwmilerwhitchurchvtak/blockassist-bc-endangered_knobby_jellyfish_1757560550
|
srwmilerwhitchurchvtak
| 2025-09-11T03:15:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"endangered knobby jellyfish",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:15:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- endangered knobby jellyfish
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1757560524
|
omerbkts
| 2025-09-11T03:15:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:15:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
harnscindi/blockassist-bc-flapping_freckled_squid_1757560508
|
harnscindi
| 2025-09-11T03:15:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping freckled squid",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:15:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping freckled squid
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
borsahopa67/blockassist-bc-polished_quiet_badger_1757560504
|
borsahopa67
| 2025-09-11T03:15:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"polished quiet badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:15:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- polished quiet badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wolfeduodrw/blockassist-bc-graceful_hulking_lemur_1757560454
|
wolfeduodrw
| 2025-09-11T03:14:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dextrous monstrous turkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:14:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dextrous monstrous turkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
meekinsvyglkcedenoxyn/blockassist-bc-nocturnal_sneaky_porpoise_1757560430
|
meekinsvyglkcedenoxyn
| 2025-09-11T03:13:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nocturnal sneaky porpoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:13:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nocturnal sneaky porpoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rakancorle1/Qwen3-4B-Instruct_0910_LODO_shopping_admin_full
|
Rakancorle1
| 2025-09-11T03:13:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T02:17:29Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen3-4B-Instruct_0910_LODO_shopping_admin_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-4B-Instruct_0910_LODO_shopping_admin_full
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the Policy_Traj_LODO_shopping_admin dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
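As a sanity check, the reported total train batch size can be reproduced from the per-device settings above (per-device batch size × gradient accumulation steps × number of devices). This is a minimal illustration of that arithmetic; the variable names are ours, not part of the training code:

```python
# Reproduce the effective (total) train batch size from the per-device settings.
train_batch_size = 4          # per-device micro-batch size
gradient_accumulation_steps = 8  # micro-batches accumulated per optimizer step
num_devices = 4               # multi-GPU data parallelism

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # → 128, matching the value reported above
```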
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
priyankajugwa/blockassist-bc-exotic_frisky_ostrich_1757560403
|
priyankajugwa
| 2025-09-11T03:13:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic frisky ostrich",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:13:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic frisky ostrich
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mcbridepollakdq/blockassist-bc-armored_cunning_armadillo_1757560406
|
mcbridepollakdq
| 2025-09-11T03:13:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic frisky ostrich",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:13:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic frisky ostrich
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helsearvalynnaseen/blockassist-bc-flapping_invisible_octopus_1757560370
|
helsearvalynnaseen
| 2025-09-11T03:13:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinky humming alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:13:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinky humming alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shikderabaan7986/blockassist-bc-shy_arctic_prawn_1757560358
|
shikderabaan7986
| 2025-09-11T03:12:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shy arctic prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:12:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shy arctic prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maukluchoda/blockassist-bc-placid_stinky_buffalo_1757560338
|
maukluchoda
| 2025-09-11T03:12:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid stinky buffalo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:12:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid stinky buffalo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
trungpq/rlcc-new-appearance-upsample_replacement-absa-max
|
trungpq
| 2025-09-11T03:12:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert_with_absa",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T16:37:58Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rlcc-new-appearance-upsample_replacement-absa-max
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlcc-new-appearance-upsample_replacement-absa-max
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6950
- Accuracy: 0.6340
- F1 Macro: 0.5522
- Precision Macro: 0.6107
- Recall Macro: 0.5627
- Total Tf: [265, 153, 1101, 153]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 44
- num_epochs: 25
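For context, the step counts logged in the results table below imply roughly 1125 optimizer steps in total (45 steps per epoch × 25 epochs), so the 44 warmup steps cover about 4% of training. A quick sketch of that arithmetic (variable names are illustrative):

```python
# Back out the total step count and warmup fraction from the logged schedule.
steps_per_epoch = 45   # from the Step column of the training-results table
num_epochs = 25
warmup_steps = 44

total_steps = steps_per_epoch * num_epochs
warmup_fraction = warmup_steps / total_steps
print(total_steps)                # 1125
print(round(warmup_fraction, 3))  # ~0.039, i.e. roughly 4% warmup
```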
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:---------------------:|
| 1.1211 | 1.0 | 45 | 1.1177 | 0.5359 | 0.3656 | 0.3252 | 0.5 | [224, 194, 1060, 194] |
| 1.1137 | 2.0 | 90 | 1.1100 | 0.5335 | 0.3706 | 0.3610 | 0.4970 | [223, 195, 1059, 195] |
| 0.9748 | 3.0 | 135 | 1.1131 | 0.6220 | 0.5222 | 0.5541 | 0.5492 | [260, 158, 1096, 158] |
| 0.7356 | 4.0 | 180 | 1.2100 | 0.6005 | 0.5350 | 0.5590 | 0.5661 | [251, 167, 1087, 167] |
| 0.6668 | 5.0 | 225 | 1.2673 | 0.6124 | 0.5507 | 0.5629 | 0.5569 | [256, 162, 1092, 162] |
| 0.4741 | 6.0 | 270 | 1.4287 | 0.6077 | 0.5256 | 0.5559 | 0.5372 | [254, 164, 1090, 164] |
| 0.43 | 7.0 | 315 | 1.5078 | 0.6172 | 0.5497 | 0.5736 | 0.5523 | [258, 160, 1094, 160] |
| 0.3213 | 8.0 | 360 | 1.6583 | 0.6364 | 0.5492 | 0.6162 | 0.5612 | [266, 152, 1102, 152] |
| 0.2496 | 9.0 | 405 | 1.6353 | 0.6364 | 0.5850 | 0.5959 | 0.5862 | [266, 152, 1102, 152] |
| 0.1908 | 10.0 | 450 | 1.8595 | 0.6364 | 0.5635 | 0.6157 | 0.5688 | [266, 152, 1102, 152] |
| 0.1383 | 11.0 | 495 | 2.0273 | 0.6292 | 0.5662 | 0.5924 | 0.5734 | [263, 155, 1099, 155] |
| 0.1125 | 12.0 | 540 | 2.0201 | 0.6555 | 0.6054 | 0.6258 | 0.6020 | [274, 144, 1110, 144] |
| 0.1046 | 13.0 | 585 | 2.3728 | 0.6411 | 0.5737 | 0.6257 | 0.5854 | [268, 150, 1104, 150] |
| 0.0897 | 14.0 | 630 | 2.4554 | 0.6459 | 0.5712 | 0.6292 | 0.5861 | [270, 148, 1106, 148] |
| 0.0518 | 15.0 | 675 | 2.2957 | 0.6531 | 0.5947 | 0.6288 | 0.5921 | [273, 145, 1109, 145] |
| 0.0587 | 16.0 | 720 | 2.4788 | 0.6411 | 0.5814 | 0.6075 | 0.5801 | [268, 150, 1104, 150] |
| 0.0445 | 17.0 | 765 | 2.6950 | 0.6340 | 0.5522 | 0.6107 | 0.5627 | [265, 153, 1101, 153] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
hendrydarrell/blockassist-bc-docile_dappled_whale_1757560314
|
hendrydarrell
| 2025-09-11T03:12:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"docile dappled whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:11:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- docile dappled whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
saujasv/gemma-hard-correctness-or-cost-ipo-random
|
saujasv
| 2025-09-11T03:11:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T03:08:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ganswiltzblack/blockassist-bc-nocturnal_humming_badger_1757560286
|
ganswiltzblack
| 2025-09-11T03:11:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nocturnal humming badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:11:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nocturnal humming badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
altonnannialton/blockassist-bc-robust_grunting_zebra_1757560240
|
altonnannialton
| 2025-09-11T03:11:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust grunting zebra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:10:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust grunting zebra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crabtreeftf/blockassist-bc-darting_mighty_panther_1757560248
|
crabtreeftf
| 2025-09-11T03:10:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting mighty panther",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:10:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting mighty panther
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
allfordedgar26/blockassist-bc-omnivorous_sprightly_aardvark_1757560226
|
allfordedgar26
| 2025-09-11T03:10:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous sprightly aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:10:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous sprightly aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
reyeslinnie223/blockassist-bc-lethal_darting_scorpion_1757560209
|
reyeslinnie223
| 2025-09-11T03:10:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lethal darting scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:10:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lethal darting scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1757560146
|
vendi11
| 2025-09-11T03:09:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:09:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
luckeciano/Qwen-2.5-7B-GRPO-Base-LR-1e-4-v2_2945
|
luckeciano
| 2025-09-11T03:09:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-10T21:46:48Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-LR-1e-4-v2_2945
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-LR-1e-4-v2_2945
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-LR-1e-4-v2_2945", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/sarwc5bg)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
caseboltvernie/blockassist-bc-quick_lazy_whale_1757560153
|
caseboltvernie
| 2025-09-11T03:09:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick lazy whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:09:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick lazy whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zaimkibriya7859/blockassist-bc-exotic_soaring_beaver_1757560136
|
zaimkibriya7859
| 2025-09-11T03:09:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic soaring beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:09:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic soaring beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lodestones/Chroma1-Radiance
|
lodestones
| 2025-09-11T03:08:52Z | 0 | 20 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-22T01:07:06Z |
---
license: apache-2.0
---
|
iekagrbaiya/blockassist-bc-clawed_rabid_fish_1757560108
|
iekagrbaiya
| 2025-09-11T03:08:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"clawed rabid fish",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:08:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- clawed rabid fish
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sensmeierbrenton/blockassist-bc-silky_solitary_boar_1757560094
|
sensmeierbrenton
| 2025-09-11T03:08:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky solitary boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:08:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky solitary boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jazmynikrr/blockassist-bc-dormant_hulking_eagle_1757560086
|
jazmynikrr
| 2025-09-11T03:08:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant hulking eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:08:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant hulking eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
heindelgadodjlemonddbu/blockassist-bc-cunning_untamed_cobra_1757560063
|
heindelgadodjlemonddbu
| 2025-09-11T03:07:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"cunning untamed cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:07:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- cunning untamed cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hamilsordar5647/blockassist-bc-chattering_hairy_woodpecker_1757560058
|
hamilsordar5647
| 2025-09-11T03:07:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"cunning untamed cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:07:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- cunning untamed cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757559621
|
cwayneconnor
| 2025-09-11T03:07:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:05:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
raileshikder7241/blockassist-bc-slender_amphibious_cheetah_1757560030
|
raileshikder7241
| 2025-09-11T03:07:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slender amphibious cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:07:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slender amphibious cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
f9997413/blockassist-bc-snorting_arctic_flamingo_1757560006
|
f9997413
| 2025-09-11T03:06:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting arctic flamingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:06:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting arctic flamingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ahumadaxhg/blockassist-bc-alert_spotted_dolphin_1757560006
|
ahumadaxhg
| 2025-09-11T03:06:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert spotted dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:06:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert spotted dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ayringdh/blockassist-bc-skittish_docile_impala_1757559984
|
ayringdh
| 2025-09-11T03:06:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skittish docile impala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:06:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skittish docile impala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jeresftarke/blockassist-bc-flapping_beaked_owl_1757559975
|
jeresftarke
| 2025-09-11T03:06:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping beaked owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:06:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping beaked owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lodikeyekfeli/blockassist-bc-tame_coiled_porcupine_1757559878
|
lodikeyekfeli
| 2025-09-11T03:04:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tame coiled porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:04:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tame coiled porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Schrod1nger/distilbert-base-uncased-finetuned-emotion
|
Schrod1nger
| 2025-09-11T03:04:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-10T09:59:00Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2009
- Accuracy: 0.929
- F1: 0.9292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
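Although the training data is not documented above, the step counts in the results table let us back out the implied training-set size (250 steps per epoch × batch size 64). This is an inference from the logged numbers, not information stated in the card:

```python
# Infer the training-set size implied by the logged schedule.
steps_per_epoch = 250   # from the Step column of the training-results table
train_batch_size = 64

implied_train_examples = steps_per_epoch * train_batch_size
print(implied_train_examples)  # 16000 (consistent with the 16k-example "emotion" dataset)
```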
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6495 | 1.0 | 250 | 0.2675 | 0.916 | 0.9158 |
| 0.2185 | 2.0 | 500 | 0.2009 | 0.929 | 0.9292 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.1+cpu
- Datasets 3.0.1
- Tokenizers 0.19.1
|
mauremilamlusa/blockassist-bc-lightfooted_hardy_jackal_1757559842
|
mauremilamlusa
| 2025-09-11T03:04:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted hardy jackal",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:04:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted hardy jackal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1757558067
|
katanyasekolah
| 2025-09-11T03:03:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:03:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RahulBhattacharya/Rahuls_Text_Classification_Sentiment_Analysis
|
RahulBhattacharya
| 2025-09-11T03:03:31Z | 32 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-31T22:41:20Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Rahuls_Text_Classification_Sentiment_Analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Rahuls_Text_Classification_Sentiment_Analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7193
- Accuracy: 0.25
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
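
The linear scheduler above ramps the learning rate down from 2e-05 to zero over the course of training. A minimal sketch of that schedule (illustrative only, not the exact `transformers` implementation; total step count depends on dataset size):

```python
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    """Linear schedule: optional warmup ramp from 0 to base_lr,
    then linear decay to 0 at total_steps."""
    if warmup_steps and step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With no warmup, the rate starts at base_lr and reaches 0 on the final step.
print(linear_lr(0, 100))    # 2e-05
print(linear_lr(50, 100))   # 1e-05
print(linear_lr(100, 100))  # 0.0
```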
### Training results
### Framework versions
- Transformers 4.56.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
|
oekaltegabi/blockassist-bc-tame_dormant_hyena_1757559782
|
oekaltegabi
| 2025-09-11T03:03:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tame dormant hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:03:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tame dormant hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kornia/Efficient_LOFTR
|
kornia
| 2025-09-11T03:03:03Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T19:33:02Z |
---
license: apache-2.0
---
|
nonibovecoray/blockassist-bc-pale_leaping_kiwi_1757559768
|
nonibovecoray
| 2025-09-11T03:03:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pale leaping kiwi",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:02:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pale leaping kiwi
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
brisondey/blockassist-bc-insectivorous_energetic_koala_1757551348
|
brisondey
| 2025-09-11T00:42:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous energetic koala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T00:42:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous energetic koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
damauoi/blockassist-bc-exotic_noisy_camel_1757551333
|
damauoi
| 2025-09-11T00:42:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic noisy camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T00:42:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic noisy camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
StephaneBah/whisper-small-rad-fr1.1
|
StephaneBah
| 2025-09-11T00:42:31Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"fr",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T17:16:54Z |
---
library_name: transformers
language:
- fr
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: 'Whisper Small Fr - Radiologie1.1'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Fr - Radiologie1.1
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8172
- Wer: 34.6740
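
The WER (word error rate) reported above is the word-level edit distance between hypothesis and reference transcripts, divided by the number of reference words. A minimal sketch of the metric (illustrative; the evaluation itself likely uses a library such as `jiwer` or `evaluate`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: Levenshtein distance over words,
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return 100.0 * dp[len(ref)][len(hyp)] / max(1, len(ref))

# One substitution out of four reference words -> 25% WER.
print(wer("le scanner est normal", "le scanner est normale"))  # 25.0
```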
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 6
- seed: 3407
- optimizer: adamw_8bit (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0594 | 31.25 | 500 | 0.7913 | 99.4516 |
| 0.0002 | 62.5 | 1000 | 0.7987 | 99.4516 |
| 0.0002 | 93.75 | 1500 | 0.8020 | 41.0116 |
| 0.0001 | 125.0 | 2000 | 0.8071 | 35.1005 |
| 0.0001 | 156.25 | 2500 | 0.8105 | 35.3443 |
| 0.0001 | 187.5 | 3000 | 0.8122 | 34.9787 |
| 0.0001 | 218.75 | 3500 | 0.8153 | 35.1615 |
| 0.0001 | 250.0 | 4000 | 0.8154 | 34.6130 |
| 0.0001 | 281.25 | 4500 | 0.8162 | 34.9787 |
| 0.0001 | 312.5 | 5000 | 0.8172 | 34.6740 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
poki1/blockassist-bc-lanky_carnivorous_slug_1757551348
|
poki1
| 2025-09-11T00:42:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-11T00:42:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lanky carnivorous slug
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eternis/eternis_router_encoder_sft_10Sep
|
eternis
| 2025-09-11T00:42:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T22:46:14Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
model-index:
- name: eternis_router_encoder_sft_10Sep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eternis_router_encoder_sft_10Sep
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6626
- Complexity Accuracy: 0.7917
- Model Accuracy: 0.7478
- Overall Accuracy: 0.5933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 10
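
With gradient accumulation, the per-device batch of 16 and 2 accumulation steps yield the effective optimizer-step batch of 32 listed above; the warmup ratio of 0.02 is likewise converted to a step count at runtime. A small sketch of both calculations (the 8750 total steps below is an illustrative figure consistent with the ~875 optimizer steps per epoch implied by the results table, not a logged value):

```python
def effective_batch_size(per_device_batch, grad_accum_steps, num_devices=1):
    """Batch size seen by each optimizer step when using gradient accumulation."""
    return per_device_batch * grad_accum_steps * num_devices

def warmup_steps(total_steps, warmup_ratio):
    """Number of warmup steps for a ratio-based LR scheduler."""
    return int(total_steps * warmup_ratio)

print(effective_batch_size(16, 2))  # 32, matching total_train_batch_size above
print(warmup_steps(8750, 0.02))     # 175
```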
### Training results
| Training Loss | Epoch | Step | Validation Loss | Complexity Accuracy | Model Accuracy | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:-------------------:|:--------------:|:----------------:|
| 0.7808 | 0.3429 | 300 | 0.7159 | 0.7452 | 0.7468 | 0.5585 |
| 0.731 | 0.6857 | 600 | 0.6979 | 0.7575 | 0.7468 | 0.568 |
| 0.7189 | 1.0286 | 900 | 0.6903 | 0.77 | 0.7468 | 0.5777 |
| 0.7168 | 1.3714 | 1200 | 0.6844 | 0.7665 | 0.7475 | 0.571 |
| 0.7033 | 1.7143 | 1500 | 0.6809 | 0.7735 | 0.7468 | 0.5763 |
| 0.7098 | 2.0571 | 1800 | 0.6830 | 0.7648 | 0.747 | 0.5723 |
| 0.6901 | 2.4 | 2100 | 0.6740 | 0.7742 | 0.747 | 0.5797 |
| 0.6815 | 2.7429 | 2400 | 0.6798 | 0.771 | 0.747 | 0.5757 |
| 0.6886 | 3.0857 | 2700 | 0.6745 | 0.78 | 0.747 | 0.583 |
| 0.6727 | 3.4286 | 3000 | 0.6749 | 0.7772 | 0.7478 | 0.5825 |
| 0.6901 | 3.7714 | 3300 | 0.6706 | 0.78 | 0.7462 | 0.583 |
| 0.6822 | 4.1143 | 3600 | 0.6702 | 0.7833 | 0.7472 | 0.5865 |
| 0.6737 | 4.4571 | 3900 | 0.6676 | 0.7825 | 0.7482 | 0.587 |
| 0.6568 | 4.8 | 4200 | 0.6707 | 0.7802 | 0.7478 | 0.5845 |
| 0.6655 | 5.1429 | 4500 | 0.6677 | 0.7855 | 0.7475 | 0.5893 |
| 0.6382 | 5.4857 | 4800 | 0.6678 | 0.7817 | 0.746 | 0.5845 |
| 0.654 | 5.8286 | 5100 | 0.6691 | 0.786 | 0.7475 | 0.588 |
| 0.6618 | 6.1714 | 5400 | 0.6652 | 0.782 | 0.748 | 0.5853 |
| 0.6607 | 6.5143 | 5700 | 0.6645 | 0.7875 | 0.7475 | 0.5897 |
| 0.6355 | 6.8571 | 6000 | 0.6628 | 0.787 | 0.748 | 0.589 |
| 0.6349 | 7.2 | 6300 | 0.6651 | 0.7887 | 0.7482 | 0.5907 |
| 0.6468 | 7.5429 | 6600 | 0.6630 | 0.7895 | 0.747 | 0.5897 |
| 0.6613 | 7.8857 | 6900 | 0.6612 | 0.79 | 0.748 | 0.5923 |
| 0.6338 | 8.2286 | 7200 | 0.6615 | 0.7883 | 0.7475 | 0.589 |
| 0.647 | 8.5714 | 7500 | 0.6635 | 0.7915 | 0.747 | 0.5917 |
| 0.633 | 8.9143 | 7800 | 0.6623 | 0.792 | 0.7472 | 0.5923 |
| 0.6683 | 9.2571 | 8100 | 0.6620 | 0.7907 | 0.7478 | 0.5915 |
| 0.6326 | 9.6 | 8400 | 0.6625 | 0.7923 | 0.7478 | 0.593 |
| 0.6249 | 9.9429 | 8700 | 0.6626 | 0.7917 | 0.7478 | 0.5933 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
|