modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-03 00:36:49) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 535 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-03 00:36:49) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
2hpsatt/blockassist-bc-huge_deft_eagle_1756799257
|
2hpsatt
| 2025-09-02T07:48:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:48:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756799241
|
sekirr
| 2025-09-02T07:48:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:47:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756799011
|
bah63843
| 2025-09-02T07:44:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:44:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bugeun/MyGemmaNPC
|
bugeun
| 2025-09-02T07:43:16Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T04:21:51Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bugeun/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
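As a rough illustration (not the recorded setup for this model), an SFT run with TRL of the kind that produces such a checkpoint looks like the sketch below; the dataset and output directory are placeholder assumptions.
```python
# Hypothetical sketch of a TRL SFT run; the dataset and output_dir are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",            # the base model named above
    train_dataset=dataset,
    args=SFTConfig(output_dir="MyGemmaNPC"),
)
trainer.train()
```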
### Framework versions
- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
John6666/natural-noob-xl-eps-anime-furry-general-v40-sdxl
|
John6666
| 2025-09-02T07:42:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"furry",
"anthro",
"aesthetic",
"color",
"knowledge",
"accuracy",
"details",
"creative",
"merge",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-09-02T07:34:09Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- furry
- anthro
- aesthetic
- color
- knowledge
- accuracy
- details
- creative
- merge
- noobai
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v1.0
- Laxhar/noobai-XL-1.1
---
Original model is [here](https://civitai.com/models/1761682?modelVersionId=2173969).
This model was created by [DarkFawkes](https://civitai.com/user/DarkFawkes).
|
mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF
|
mradermacher
| 2025-09-02T07:42:10Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"translation",
"en",
"base_model:tencent/Hunyuan-MT-Chimera-7B",
"base_model:quantized:tencent/Hunyuan-MT-Chimera-7B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
translation
| 2025-09-02T06:24:19Z |
---
base_model: tencent/Hunyuan-MT-Chimera-7B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- translation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/tencent/Hunyuan-MT-Chimera-7B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Hunyuan-MT-Chimera-7B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
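As a quick start, here is a minimal sketch using llama-cpp-python (an assumption on our part; any GGUF-capable runtime works), loading the i1-Q4_K_M file from the table below:
```python
# Minimal sketch with llama-cpp-python; the file name matches the
# i1-Q4_K_M row in the quant table below (download it first).
from llama_cpp import Llama

llm = Llama(
    model_path="Hunyuan-MT-Chimera-7B.i1-Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to your memory budget
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate into German: The weather is nice today."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```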
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q4_1.gguf) | i1-Q4_1 | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
sekirr/blockassist-bc-masked_tenacious_whale_1756798823
|
sekirr
| 2025-09-02T07:41:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:41:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RikiyaT/mxbai-ettin-32m-nq-rlhn-ft
|
RikiyaT
| 2025-09-02T07:39:34Z | 0 | 0 | null |
[
"safetensors",
"modernbert",
"license:mit",
"region:us"
] | null | 2025-09-02T04:44:28Z |
---
license: mit
---
# RikiyaT/mxbai-ettin-32m-nq-rlhn-ft
Ettin + AnglE fine-tuned embedding model.
- **Base Model**: `RikiyaT/mxbai-ettin-32m-pretrained`
- **Pooling Strategy**: `mean` (avg)
- **Training Method**: AnglE loss (ibn/cln + angle=0.02) on a B-format dataset (text, positive, negative).
- **Data Prompts**: `search_query:` / `search_document:` were used during training data creation.
## Usage
### With SentenceTransformers (recommended)
A ready-to-use SentenceTransformers variant is available at **[RikiyaT/mxbai-ettin-32m-nq-rlhn-ft-st](https://huggingface.co/RikiyaT/mxbai-ettin-32m-nq-rlhn-ft-st)**.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('RikiyaT/mxbai-ettin-32m-nq-rlhn-ft-st')
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
print(embeddings.shape)
```
### With Transformers (this repository)
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("RikiyaT/mxbai-ettin-32m-nq-rlhn-ft", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("RikiyaT/mxbai-ettin-32m-nq-rlhn-ft", trust_remote_code=True)
```
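Since the card specifies mean pooling and the `search_query:` / `search_document:` prompts, here is a sketch of encoding that continues the snippet above; applying the prompts at inference is our assumption based on the training setup.
```python
# Mean-pooled embeddings with the training-time prompts; continues the
# model/tokenizer loaded above. Prompt usage at inference is an assumption.
import torch

def encode(texts, prompt="search_document: "):
    inputs = tokenizer([prompt + t for t in texts], padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state         # (batch, seq, dim)
    mask = inputs["attention_mask"].unsqueeze(-1).float()  # (batch, seq, 1)
    return (hidden * mask).sum(1) / mask.sum(1)            # mean over real tokens

doc_emb = encode(["This is an example sentence"])
query_emb = encode(["example sentence"], prompt="search_query: ")
print(doc_emb.shape, query_emb.shape)
```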
|
nightmedia/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q4-hi-mlx
|
nightmedia
| 2025-09-02T07:37:06Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"programming",
"code generation",
"code",
"codeqwen",
"moe",
"coding",
"coder",
"qwen2",
"chat",
"qwen",
"qwen-coder",
"finetune",
"brainstorm 20x",
"brainstorm",
"optional thinking",
"creative",
"all use cases",
"text-generation",
"conversational",
"en",
"fr",
"zh",
"de",
"base_model:DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL",
"base_model:quantized:DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-09-02T07:21:05Z |
---
license: apache-2.0
library_name: mlx
language:
- en
- fr
- zh
- de
tags:
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- qwen3
- finetune
- brainstorm 20x
- brainstorm
- optional thinking
- creative
- all use cases
- mlx
base_model: DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL
pipeline_tag: text-generation
---
# Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q4-hi-mlx
This model [Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q4-hi-mlx](https://huggingface.co/nightmedia/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q4-hi-mlx) was
converted to MLX format from [DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL](https://huggingface.co/DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL)
using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q4-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756798415
|
liukevin666
| 2025-09-02T07:36:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:34:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tencent/Hunyuan-7B-Instruct
|
tencent
| 2025-09-02T07:35:40Z | 2,507 | 73 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-7B-Pretrain",
"base_model:finetune:tencent/Hunyuan-7B-Pretrain",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-30T03:40:59Z |
---
base_model:
- tencent/Hunyuan-7B-Pretrain
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-7B-Instruct"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-7B"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain**, **Hunyuan-0.5B-Instruct**, **Hunyuan-1.8B-Pretrain**, **Hunyuan-1.8B-Instruct**, **Hunyuan-4B-Pretrain**, **Hunyuan-4B-Instruct**, **Hunyuan-7B-Pretrain**, **Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks are evaluated by TRT-LLM-backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install transformers.
```SHELL
pip install "transformers>=4.56.0"
```
Our model defaults to slow-thinking reasoning. There are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Add **"/no_think"** before the prompt to force the model to skip CoT reasoning. Similarly, add **"/think"** before the prompt to force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and apply the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
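To disable CoT for a given call, here is a minimal variation of the snippet above (both options come straight from the two methods listed earlier):
```python
# Option 1: disable thinking via the chat-template flag
tokenized_chat = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt",
    enable_thinking=False,
)

# Option 2: force fast thinking for this prompt only via the "/no_think" prefix
messages = [{"role": "user", "content": "/no_think Write a short summary of the benefits of regular exercise"}]
```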
We recommend using the following set of parameters for inference. Note that our model does not have a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
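These map directly onto `generate` keyword arguments; continuing the snippet above:
```python
outputs = model.generate(
    tokenized_chat.to(model.device),
    max_new_tokens=2048,
    do_sample=True,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    temperature=0.7,
)
```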
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In the following chapter, we will introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformer Library**: Use the companion branch (Hunyuan-submitted code is pending review)
```
pip install "transformers>=4.56.0"
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
*Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
*Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngelSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantized models. `AngelSlim` is a toolset dedicated to creating a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8-static quantization: an 8-bit floating-point format whose quantization scales are pre-determined from a small amount of calibration data (no training required). Model weights and activation values are converted to FP8, improving inference efficiency and lowering the deployment threshold. You can quantize with AngelSlim yourself, or directly download and use our released quantized models [LINK](https://huggingface.co/).
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights; the weights are adjusted layer by layer via an optimization process that approximates the inverse Hessian. This eliminates the need to retrain the model and requires only a small amount of calibration data, improving inference efficiency and lowering the deployment threshold.
AWQ uses a small amount of calibration data (no training required) to measure the amplitude of the activation values. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize with [AngelSlim](https://github.com/tencent/AngelSlim) yourself, or directly download and use our released quantized models [LINK](https://huggingface.co/).
#### Quantization Benchmark
This subsection reports benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker images are available at https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags.
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started, pull the image (tags are listed at https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags):
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vLLM
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download the model files:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- For a model downloaded from Hugging Face:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- For a model downloaded from ModelScope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service script starts successfully, send a test request:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
#### Quantized model deployment
This section describes the process of deploying a post-quantization model using vLLM.
The default server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable as above.
Next, start the Int8 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4-weight-only version of the HunYuan-7B model (quantized with the GPTQ method) only requires pointing the environment variable at the Int4 weights:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next, start the Int4 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable accordingly.
Next, start the FP8 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path tencent/Hunyuan-7B-Instruct --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, feel free to contact our open-source team. You can also reach us via email (hunyuan_opensource@tencent.com).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756798405
|
xinnn32
| 2025-09-02T07:34:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:34:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
John6666/mocase-mix-hanekawa-mix-noobai-vpred-sdxl
|
John6666
| 2025-09-02T07:34:07Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"mocase style",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-09-02T07:29:15Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- mocase style
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/593414/mocasemix?modelVersionId=2169426).
This model was created by [hanekawa1](https://civitai.com/user/hanekawa1).
|
Egor-N/blockassist-bc-vicious_stubby_bear_1756797105
|
Egor-N
| 2025-09-02T07:33:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious stubby bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:33:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious stubby bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756798346
|
sekirr
| 2025-09-02T07:33:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:33:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756798335
|
omerbektass
| 2025-09-02T07:32:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:32:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1756798287
|
vendi11
| 2025-09-02T07:32:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:32:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Hunyuan-MT-Chimera-7B-GGUF
|
mradermacher
| 2025-09-02T07:30:45Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"translation",
"en",
"base_model:tencent/Hunyuan-MT-Chimera-7B",
"base_model:quantized:tencent/Hunyuan-MT-Chimera-7B",
"endpoints_compatible",
"region:us",
"conversational"
] |
translation
| 2025-09-02T04:36:38Z |
---
base_model: tencent/Hunyuan-MT-Chimera-7B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- translation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/tencent/Hunyuan-MT-Chimera-7B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Hunyuan-MT-Chimera-7B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.Q3_K_S.gguf) | Q3_K_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.Q4_K_S.gguf) | Q4_K_S | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.Q4_K_M.gguf) | Q4_K_M | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.Q6_K.gguf) | Q6_K | 6.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hunyuan-MT-Chimera-7B-GGUF/resolve/main/Hunyuan-MT-Chimera-7B.f16.gguf) | f16 | 15.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bah63843/blockassist-bc-plump_fast_antelope_1756798106
|
bah63843
| 2025-09-02T07:29:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:29:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zaydzuhri/top-code-1.8B-4096-model
|
zaydzuhri
| 2025-09-02T07:25:41Z | 0 | 0 | null |
[
"safetensors",
"top_transformer",
"region:us"
] | null | 2025-09-02T07:14:58Z |
<div align="center">
# 🔥 Flame: Flash Linear Attention Made Easy
</div>
Welcome to 🔥 `flame`, a minimal and efficient framework built on `torchtitan` for training Flash Linear Attention (FLA) models (and more broadly, arbitrary autoregressive language models) with blazing efficiency.
**Feature Highlights:**
- 🚀 Minimal, easy-to-use, extensible training framework
- 🤗 Seamless integration with `fla` and `transformers`
- 🔄 Zero-cost data preprocessing: online tokenization, dataset shuffling, and multiple datasets support
- 🔮 4D parallelism (coming soon)
## Setup
To get started, clone the `flame` repository and install the required dependencies:
```bash
git clone https://github.com/fla-org/flame.git
cd flame
pip install .
```
`flame` manages minimal dependencies, only including `fla` and `torchtitan` as submodules.
After installation, initialize and update the submodules:
```sh
git submodule update --init --recursive
```
## Dataset Preparation
To download the dataset to your local disk, create a new Python file with the following content and execute it:
```py
from datasets import load_dataset
# load fineweb-edu with parallel processing
dataset = load_dataset("HuggingFaceFW/fineweb-edu", name="default", num_proc=64, cache_dir="/your/cache/path")
# or load a subset with roughly 100B tokens, suitable for small- or medium-sized experiments
dataset = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-100BT", num_proc=64, cache_dir="/your/cache/path")
```
## Training Recipes
Here's an example of training a 340M FLA Transformer model with a LLaMA-like architecture from scratch on a 100BT subset of the Fineweb-edu corpus in streaming mode.
> [!WARNING]
> If the dataset is not downloaded beforehand, the streaming mode will attempt to fetch it from a remote server and download it on-the-fly, which can be highly unstable during training due to network issues.
> For stable training, ensure the dataset is downloaded locally (see [**Dataset Preparation**](#dataset-preparation)). Otherwise, we assume you are only testing the new corpus.
```sh
bash train.sh \
--job.config_file flame/models/fla.toml \
--job.dump_folder exp/transformer-340M-4K-10B/batch1.seqlen65536.context4096.warmup1024.update1.steps20480.lr3e-4.cosine \
--model.config configs/transformer_340M.json \
--model.tokenizer_path fla-hub/transformer-1.3B-100B \
--optimizer.name AdamW \
--optimizer.eps 1e-15 \
--optimizer.lr 3e-4 \
--lr_scheduler.warmup_steps 1024 \
--lr_scheduler.lr_min 0.1 \
--lr_scheduler.decay_type cosine \
--training.batch_size 1 \
--training.seq_len 65536 \
--training.context_len 4096 \
--training.varlen \
--training.gradient_accumulation_steps 1 \
--training.steps 20480 \
--training.max_norm 1.0 \
--training.skip_nan_inf \
--training.dataset HuggingFaceFW/fineweb-edu \
--training.dataset_name sample-100BT \
--training.dataset_split train \
--training.streaming \
--training.num_workers 32 \
--training.prefetch_factor 2 \
--training.seed 42 \
--training.compile \
--checkpoint.interval 2048 \
--checkpoint.load_step -1 \
--checkpoint.keep_latest_k 2 \
--metrics.log_freq 1
```
You can specify the number of GPUs by setting the environment variable `NGPU`, which defaults to 8.
**For single-GPU debugging, set `NGPU=1`.**
We provide several [config files](https://github.com/fla-org/flame/tree/main/configs) for different models.
By default, the learning rate is set to 3e-4 with a cosine scheduler. Other schedulers, such as WSD (wsd), are also supported.
**Key parameters:**
- `--lr_scheduler.decay_ratio`: The proportion of the steps allocated to the decay phase. The learning rate will remain stable after the warmup period and only start decaying during the last `decay_ratio` portion of the total training steps, which is known as the Warmup-Stable-Decay (WSD) schedule (see the sketch after this list).
- `--lr_scheduler.warmup_steps`: The number of steps for the learning rate warmup phase.
- `--training.steps`: Total number of training steps.
- `--training.batch_size`: Batch size per device, must be 1 if `--training.varlen` is set.
- `--training.seq_len`: The length of each sequence in the batch, which is concatenated from multiple samples.
- `--training.context_len`: The max allowed length of a sample. For non-varlen mode, this is equivalent to `seq_len`.
- `--training.varlen`: Whether to conduct variable-length sequence training.
- `--training.gradient_accumulation_steps`: Number of gradient accumulation steps.
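As a conceptual sketch (not `flame`'s actual scheduler code), the WSD learning-rate multiplier as a function of step could look like:
```py
def wsd_multiplier(step, total_steps, warmup_steps, decay_ratio, min_lr_ratio=0.1):
    """Warmup-Stable-Decay multiplier in [min_lr_ratio, 1]; conceptual sketch only."""
    decay_start = int(total_steps * (1 - decay_ratio))
    if step < warmup_steps:                        # linear warmup
        return step / max(1, warmup_steps)
    if step < decay_start:                         # stable phase
        return 1.0
    frac = (step - decay_start) / max(1, total_steps - decay_start)
    return 1.0 - (1.0 - min_lr_ratio) * frac       # linear decay to the floor
```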
> [!WARNING]
> The total number of tokens processed per batch, referred to as `global_batch_size`, is calculated as batch_size × gradient_accumulation_steps × num_gpus.
> Each step processes `global_batch_size * seq_len` tokens.
> Monitor the value of `global_batch_size`, `warmup_steps`, and `steps` carefully when modifying any of the hyperparameters!
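Worked through for the recipe above (batch_size=1, gradient_accumulation_steps=1, the default 8 GPUs, seq_len=65536, 20480 steps):
```py
global_batch_size = 1 * 1 * 8                # batch_size x grad_accum steps x num_gpus
tokens_per_step = global_batch_size * 65536  # 524,288 tokens per optimizer step
total_tokens = tokens_per_step * 20480       # ~10.7B tokens, matching the "10B" run name
print(global_batch_size, tokens_per_step, total_tokens)
```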
For a detailed explanation of all parameters, run:
```sh
bash train.sh -h
```
<details>
<summary>Usage</summary>
```py
options:
-h, --help show this help message and exit
--job.config_file JOB.CONFIG_FILE
Job config file
--job.dump_folder JOB.DUMP_FOLDER
Folder to dump job outputs
--job.description JOB.DESCRIPTION
Description of the job
--job.use_for_integration_test
Add this config to the integration test suite
--job.print_args Print the args to terminal
--model.config MODEL.CONFIG
Path to the model config
--model.norm_type MODEL.NORM_TYPE
Type of layer normalization to use [layernorm,
np_layernorm, rmsnorm, fused_rmsnorm]
--model.tokenizer_path MODEL.TOKENIZER_PATH
Tokenizer path
--profiling.enable_profiling
Whether to enable pytorch profiler
--profiling.save_traces_folder PROFILING.SAVE_TRACES_FOLDER
Trace files location
--profiling.profile_freq PROFILING.PROFILE_FREQ
How often to collect profiler traces, in iterations
--profiling.enable_memory_snapshot
Whether to dump memory snapshot
--profiling.save_memory_snapshot_folder PROFILING.SAVE_MEMORY_SNAPSHOT_FOLDER
Memeory snapshot files location
--optimizer.name OPTIMIZER.NAME
Optimizer to use
--optimizer.eps OPTIMIZER.EPS
Epsilon value for the optimizer.
--optimizer.fused Whether the fused implementation(CUDA only) is used.
--optimizer.scheduler {wsd,cosine,linear}
Scheduler to use. Currently supported: wsd, cosine,
and linear.
--optimizer.lr OPTIMIZER.LR
Learning rate to use
--optimizer.min_lr_ratio OPTIMIZER.MIN_LR_RATIO
Min lr ratio for lr scheduler
--optimizer.early_step_in_backward
Whether to apply optimizer in the backward. Caution,
optimizer_in_backward is not compatible with gradients
clipping, users should not call
register_post_accumulate_grad_hook after the optimizer
is built.
--training.batch_size TRAINING.BATCH_SIZE
Batch size
--training.seq_len TRAINING.SEQ_LEN
Sequence length
--training.context_len TRAINING.CONTEXT_LEN
Max length allowed for each sequence
--training.varlen Whether to take sequences of variable length as input
--training.warmup_steps TRAINING.WARMUP_STEPS
Steps for lr scheduler warmup, normally 1/5 of
--training.steps
--training.gradient_accumulation_steps TRAINING.GRADIENT_ACCUMULATION_STEPS
Number of steps to accumulate gradients before
updating parameters
--training.steps TRAINING.STEPS
How many train steps to run
--training.max_norm TRAINING.MAX_NORM
Max norm for gradient clipping
--training.skip_nan_inf
Skip batch updates when NaN or INF gradients are
encountered during training
--training.dataset TRAINING.DATASET
Dataset to use, with comma separated values
--training.dataset_name TRAINING.DATASET_NAME
The name of the dataset config, with comma separated
values if provided
--training.dataset_split TRAINING.DATASET_SPLIT
Dataset split to use, with comma separated values if
provided
--training.data_dir TRAINING.DATA_DIR
Data dirs to use, with comma separated values if
provided
--training.data_files TRAINING.DATA_FILES
Data files to use, with comma separated values if
provided
--training.data_probs TRAINING.DATA_PROBS
Data sampling probabilities, with comma separated
values if provided
--training.streaming Whether to load dataset in streaming mode, used for
huge dataset
--training.num_workers TRAINING.NUM_WORKERS
Number of subprocesses to use for data loading. 0
means that the data will be loaded in the main
process.
--training.prefetch_factor TRAINING.PREFETCH_FACTOR
Number of batches loaded in advance by each worker.2
means there will be a total of 2 * num_workers batches
prefetched across all workers.
--training.data_parallel_replicate_degree TRAINING.DATA_PARALLEL_REPLICATE_DEGREE
The `data_parallel_replicate_degree` argument
specifies the degree of data parallelism for weight
replication. When this value is greater than 1,
weights will be replicated across
`data_parallel_replicate_degree` ranks. If
`data_parallel_shard_degree` is also greater than 1,
the parallelism method used is HSDP (Hybrid Sharded
Data Parallelism). Otherwise, the parallelism method
used is DDP (Distributed Data Parallelism). 1 means
disabled.
--training.data_parallel_shard_degree TRAINING.DATA_PARALLEL_SHARD_DEGREE
The `data_parallel_shard_degree` argument specifies
the degree of data parallelism for weight sharding.
When this value is greater than 1, weights will be
sharded across `data_parallel_shard_degree` ranks. If
`data_parallel_replicate_degree` is also greater than
1, the parallelism method used is HSDP (Hybrid Sharded
Data Parallelism). Otherwise, the parallelism method
used is FSDP (Fully Sharded Data Parallelism). -1
means leftover ranks will be used (After
DP_REPLICATE/SP/PP). Note that only
`data_parallel_shard_degree` can be negative. 1 means
disabled.
--training.enable_cpu_offload
Whether to apply CPU offloading of parameters,
gradients, and optimizer states in FSDP
--training.tensor_parallel_degree TRAINING.TENSOR_PARALLEL_DEGREE
Tensor Parallelism degree. 1 means disabled.
--training.disable_loss_parallel
Whether to apply loss parallel when sequence parallel
is enabled
--training.mixed_precision_param {bfloat16,float32}
torch dtype to use for parameters when applying mixed
precision via FSDP. This feature only takes effect
when data_parallel_shard_degree > 1
--training.mixed_precision_reduce {float32}
torch dtype to use for reductions when applying mixed
precision via FSDP. This feature only takes effect
when data_parallel_shard_degree > 1
--training.compile Whether to compile the model
--training.gc_freq TRAINING.GC_FREQ
Python garbage control scheduling interval, in steps
--training.seed TRAINING.SEED
Choose the base RNG seed used for training
--training.deterministic
Use deterministic algorithms wherever possible, may be
slower
--metrics.log_freq METRICS.LOG_FREQ
How often to log metrics to TensorBoard, in iterations
--metrics.enable_tensorboard
Whether to log metrics to TensorBoard
--metrics.disable_color_printing
Whether to disable color printing in logs
--metrics.save_tb_folder METRICS.SAVE_TB_FOLDER
Folder to dump TensorBoard states
--metrics.rank_0_only
Whether to save TensorBoard metrics only for rank 0 or
for all ranks. When pipeline_parallel_degree is > 1,
this option uses the 0th rank of the last stage
pipeline group, which is the only stage that computes
loss metrics.
--metrics.enable_wandb
Whether to log metrics to Weights & Biases
--experimental.enable_async_tensor_parallel
Whether to apply async tensor parallel (currently only
effective when compile is enabled)
--experimental.pipeline_parallel_degree EXPERIMENTAL.PIPELINE_PARALLEL_DEGREE
Pipeline Parallelism degree, or number of ranks. 1
means disabled. If using looped schedules, this still
specifies the number of physical ranks, not the number
of stages. Stages per rank are inferred from split
points degree, and schedule.
--experimental.pipeline_parallel_split_points EXPERIMENTAL.PIPELINE_PARALLEL_SPLIT_POINTS [EXPERIMENTAL.PIPELINE_PARALLEL_SPLIT_POINTS ...]
Specify comma-separated names of modules to use as the
beginning of a split point. e.g. "layers.0,layers.2"
will cause the model to be split into 3 stages, the
first containing all the layers up to layers.0, the
second containing layers.0 and up to layers.2, the
third containing layers.2 and all the remaining
layers. Note: fully-automated splitting may be enabled
in the future, but currently the split points must be
specified manually.
--experimental.pipeline_parallel_schedule EXPERIMENTAL.PIPELINE_PARALLEL_SCHEDULE
Specify the Pipeline Parallel schedule to use. The
supported schedules are: https://github.com/pytorch/py
torch/blob/de4c2a3b4e89d96334dc678d1c3f2ae51a6630a0/to
rch/distributed/pipelining/schedules.py#L2161. The
schedule must be compatible with the split points and
stages_per_rank. Looped schedules (e.g.
Interleaved1F1B) require specifying
pipeline_parallel_degree = number of ranks, and
split_points = number of stages - 1
--experimental.pipeline_parallel_schedule_csv EXPERIMENTAL.PIPELINE_PARALLEL_SCHEDULE_CSV
Specify the path to the pipeline parallel schedule csv
file to use. The pipeline_parallel_schedule argument
must be either PipelineScheduleSingle,
PipelineScheduleMulti, or _PipelineScheduleRuntime.
--experimental.pipeline_parallel_microbatches EXPERIMENTAL.PIPELINE_PARALLEL_MICROBATCHES
How many microbatches to split the global training
batch into when using pipeline parallelism. The global
training batch size must be evenly divisible by the
number of microbatches. The default value will be the
number of pipeline stages, if unspecified.
--experimental.enable_compiled_autograd
Enable CompiledAutograd to compile the backward.
--experimental.context_parallel_degree EXPERIMENTAL.CONTEXT_PARALLEL_DEGREE
Context parallelism degree. 1 means disabled.
--experimental.context_parallel_rotate_method EXPERIMENTAL.CONTEXT_PARALLEL_ROTATE_METHOD
The collective to use in context parallel SDPA for kv
shards exchange. 'allgather' means to all-gather all
kv shards on ranks after the first sub-SDPA
computation, 'alltoall' means to all-to-all shuffle
the kv shards. The default value is 'allgather'.
--checkpoint.enable_checkpoint
Whether to enable checkpoint
--checkpoint.folder CHECKPOINT.FOLDER
The folder to store the checkpoints. When
enable_checkpoint is set to true, checkpoints will be
in {--job.dump_folder}/{--checkpoint.folder}.
--checkpoint.interval_type CHECKPOINT.INTERVAL_TYPE
Checkpointing interval unit of measurement ['step',
'seconds']
--checkpoint.interval CHECKPOINT.INTERVAL
Checkpointing interval, in steps or seconds depending
on --checkpoint.interval_type
--checkpoint.model_weights_only
When model_weights_only=True, only model weights will
be saved at the end of training. With this,
checkpoints can be loaded using `torch.load(...,
weights_only=True)` after conversion. When
model_weights_only=False, the full checkpoint will be
saved. A full checkpoint includes model, optimizer and
train_state, which can be used to resume training. The
default value is false.
--checkpoint.export_dtype {float16,bfloat16,float32}
Converts to the specified precision when training
completes and model_weights_only=true. Currently
supports float32, float16, and bfloat16. The default
value is float32.
--checkpoint.create_seed_checkpoint
Initializes the full model without applying
parallelisms, and then saves it as a seed checkpoint.
Note: requires the user to call train.py without
specifying any parallelisms, e.g. NGPU=1. Could be
implemented as a separate script, but this way shares
more code.
--checkpoint.async_mode CHECKPOINT.ASYNC_MODE
Which async checkpoint mode to use. Currently there
are 3 different modes. 1. "disabled": synchronized
checkpointing will be used. 2. "async":
torch.distributed.checkpoint.async_save will be used.
3. "async_with_pinned_mem": this option utilizes a
dedicated pinned memory space and creates a separate
process for faster GPU->CPU transfer performance,
eliminating GIL contention. The cost is increased CPU
memory usage. If insufficient CPU memory is available,
performance may degrade due to memory paging. For most
users, "async" should suffice as the performance
overhead is typically small (on the order of tens of
seconds) compared to checkpointing frequency. This
mode can be employed to pursue near-zero checkpointing
times (e.g., < 1 second) given appropriate hardware
support such as ample CPU memory and fast PCIe.
"disabled" is the default mode.
--checkpoint.keep_latest_k CHECKPOINT.KEEP_LATEST_K
Keeps only the latest k checkpoints, purging older
ones. If 0, all checkpoints are kept. 0 is the default
value.
--checkpoint.load_step CHECKPOINT.LOAD_STEP
Load the checkpoint at the specified step. If -1, load
the latest checkpoint.
--float8.enable_float8_linear
If true, swaps `torch.nn.Linear` with `Float8Linear`.
This feature requires you to install 'torchao' which
can be found here: https://github.com/pytorch/ao
--float8.enable_fsdp_float8_all_gather
Whether enable float8 all-gather in FSDP
--float8.precompute_float8_dynamic_scale_for_fsdp
Whether precompute float8 scales dynamically for FSDP
--float8.scaling_type_input {dynamic,delayed}
float8 scaling for input, dynamic (default) or delayed
--float8.scaling_type_weight FLOAT8.SCALING_TYPE_WEIGHT
float8 scaling for weight, dynamic (default) or delayed
--float8.scaling_type_grad_output FLOAT8.SCALING_TYPE_GRAD_OUTPUT
float8 scaling for grad_output, dynamic (default) or delayed
--comm.init_timeout_seconds COMM.INIT_TIMEOUT_SECONDS
Timeout for communication operations, during
initialization and first train step.
--comm.train_timeout_seconds COMM.TRAIN_TIMEOUT_SECONDS
Timeout for communication operations after the first
train step -- usually a tighter bound than during
initialization.
--comm.trace_buf_size COMM.TRACE_BUF_SIZE
Flight recorder ring buffer size, >0 means recording
by default, 0 means disabled
--memory_estimation.enabled
Whether to estimate memory usage for FSDP
--memory_estimation.disable_fake_mode
Whether to estimate memory under FakeTensorMode
```
</details>
### Training with `torch.compile`
Since `torch 2.0`, `torch.compile` has been available as a feature that seamlessly accelerates training.
In `flame`, you can enable `torch.compile` simply by adding the `--training.compile` flag to your training script.
However, `fla` integrates numerous fused kernels for acceleration, which may conflict with `torch.compile`.
We are actively working on resolving these issues to make compilation transparent to users.
In the meantime, please ensure you are using the latest dependencies.
Specifically, **we recommend using `torch>=2.6` and `triton>=3.0`**.
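For example, a minimal sketch of enabling compilation (the launch command below is a placeholder; only the `--training.compile` flag is taken from the options above):
```sh
# Placeholder launch command; substitute your usual flame invocation.
torchrun --nproc_per_node=8 train.py \
  --training.compile
```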
### Training with multiple datasets
If you wish to train a model with all-round capabilities (e.g., code, math, and multilingual ability), it's necessary to train on multiple datasets.
`flame` allows training with multiple datasets easily.
For example, you can specify the following arguments to train on 6 datasets with different proportions:
```sh
--training.dataset HuggingFaceFW/fineweb-edu,opencsg/Fineweb-Edu-Chinese-V2.1,OpenCoder-LLM/opc-fineweb-code-corpus,math-ai/AutoMathText,EleutherAI/proof-pile-2,OpenCoder-LLM/opc-fineweb-math-corpus \
--training.data_probs 0.6,0.15,0.15,0.014,0.058,0.028 \
```
### ~Finalizing training~
> [!NOTE]
> We have done this conversion automatically in the training script since our latest updates.
Once training is complete, you may want to convert the distributed checkpoints (DCPs) into the 🤗 format for broader use.
To facilitate this, we provide a straightforward conversion script:
```sh
python -m flame.utils.convert_dcp_to_hf --path <path_to_model> --step <step> --config <path_to_config> --tokenizer <path_to_tokenizer>
```
After this, your model will be in the 🤗 format, ready to be shared or deployed.
You can then easily publish your model with the `huggingface_hub` library for wider accessibility.
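For instance, a minimal publishing sketch with `huggingface_hub` (the repo id and local path are placeholders, and this assumes you have already authenticated, e.g. via `huggingface-cli login`):
```python
from huggingface_hub import HfApi

api = HfApi()
# Placeholder repo id and folder; point these at your account and converted model.
api.create_repo("your-username/your-model", exist_ok=True)
api.upload_folder(folder_path="path/to/converted_hf_model", repo_id="your-username/your-model")
```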
### Continual training
If you wish to build upon a strong pre-trained model (in 🤗 format) and continue training, we also offer a script to convert the 🤗 format model back into DCP format.
This allows you to seamlessly resume training with `flame`.
```sh
python -m flame.utils.convert_hf_to_dcp --model <path_to_hf> --checkpoint <path_to_dcp/checkpoint/step-0>
```
Here, `<path_to_dcp>` is the directory where your distributed checkpoints will be stored.
The checkpoint is intentionally saved at `<step-0>` within the checkpoint folder to ensure it is loadable by `flame` during the initial training step, similar to how a seed checkpoint is handled.
Once the conversion is complete, you can proceed with training using `flame` as usual, continuing from where the pretrained model left off.
## Multi-node training
If you have access to multi-node GPUs, consider leveraging them for optimal performance.
This process is straightforward and well-documented in the PyTorch [docs](https://pytorch.org/docs/stable/elastic/run.html).
To set up multi-node training:
* Set the environment variables `MASTER_ADDR=<ip>` and `MASTER_PORT=<port>` before running the training script across all nodes.
* If you're using a job scheduler like Slurm, it will handle these variables for you.
`torchtitan` provides a [Slurm script](https://github.com/pytorch/torchtitan/blob/main/multinode_trainer.slurm) for multi-node training, which you can use as a reference or starting point.
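For reference, here is a minimal sketch of a manual two-node launch (node count, GPU count, and the entry point are assumptions; adapt them to your cluster):
```sh
# Run this on every node; MASTER_ADDR/MASTER_PORT must point at the rank-0 node.
export MASTER_ADDR=<ip>
export MASTER_PORT=<port>
torchrun --nnodes=2 --nproc_per_node=8 \
  --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR:$MASTER_PORT \
  train.py
```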
|
amphion/TaDiCodec-TTS-AR-Qwen2.5-0.5B
|
amphion
| 2025-09-02T07:24:07Z | 56 | 6 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"Speech-Tokenizer",
"text-to-speech",
"en",
"zh",
"ja",
"fr",
"de",
"ko",
"arxiv:2508.16790",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2025-08-22T21:05:07Z |
---
language:
- en
- zh
- ja
- fr
- de
- ko
library_name: transformers
license: apache-2.0
pipeline_tag: text-to-speech
tags:
- Speech-Tokenizer
---
# 🚀 TaDiCodec
We introduce the **T**ext-**a**ware **Di**ffusion Transformer Speech **Codec** (TaDiCodec), a novel approach to speech tokenization that employs end-to-end optimization for quantization and reconstruction through a **diffusion autoencoder**, while integrating **text guidance** into the diffusion decoder to enhance reconstruction quality and achieve **optimal compression**. TaDiCodec achieves an extremely low frame rate of **6.25 Hz** and a corresponding bitrate of **0.0875 kbps** with a single-layer codebook for **24 kHz speech**, while maintaining superior performance on critical speech generation evaluation metrics such as Word Error Rate (WER), speaker similarity (SIM), and speech quality (UTMOS).
[](https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer)
[](https://arxiv.org/abs/2508.16790)
[](https://tadicodec.github.io/)
[](https://www.python.org/)
[](https://pytorch.org/)
[](https://huggingface.co/amphion/TaDiCodec)
# 🤗 Pre-trained Models
## 📦 Model Zoo - Ready to Use!
*Download our pre-trained models for instant inference*
## 🎵 TaDiCodec
| Model | 🤗 Hugging Face | 👷 Status |
|:-----:|:---------------:|:------:|
| **🚀 TaDiCodec** | [](https://huggingface.co/amphion/TaDiCodec) | ✅ |
| **🚀 TaDiCodec-old** | [](https://huggingface.co/amphion/TaDiCodec-old) | 🚧 |
*Note: TaDiCodec-old is the previous version of TaDiCodec; TaDiCodec-TTS-AR-Phi-3.5-4B is based on TaDiCodec-old.*
## 🎤 TTS Models
| Model | Type | LLM | 🤗 Hugging Face | 👷 Status |
|:-----:|:----:|:---:|:---------------:|:-------------:|
| **🤖 TaDiCodec-TTS-AR-Qwen2.5-0.5B** | AR | Qwen2.5-0.5B-Instruct | [](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Qwen2.5-0.5B) | ✅ |
| **🤖 TaDiCodec-TTS-AR-Qwen2.5-3B** | AR | Qwen2.5-3B-Instruct | [](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Qwen2.5-3B) | ✅ |
| **🤖 TaDiCodec-TTS-AR-Phi-3.5-4B** | AR | Phi-3.5-mini-instruct | [](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Phi-3.5-4B) | 🚧 |
| **🌊 TaDiCodec-TTS-MGM** | MGM | - | [](https://huggingface.co/amphion/TaDiCodec-TTS-MGM) | ✅ |
## 🔧 Quick Model Usage
```python
# 🤗 Load from Hugging Face
from models.tts.tadicodec.inference_tadicodec import TaDiCodecPipline
from models.tts.llm_tts.inference_llm_tts import TTSInferencePipeline
from models.tts.llm_tts.inference_mgm_tts import MGMInferencePipeline
# Load the TaDiCodec tokenizer; it will automatically download the model from Hugging Face on first use
tokenizer = TaDiCodecPipline.from_pretrained("amphion/TaDiCodec")
# Load the AR TTS model; it will automatically download the model from Hugging Face on first use
tts_model = TTSInferencePipeline.from_pretrained("amphion/TaDiCodec-TTS-AR-Qwen2.5-3B")
# Load the MGM TTS model; it will automatically download the model from Hugging Face on first use
tts_model = MGMInferencePipeline.from_pretrained("amphion/TaDiCodec-TTS-MGM")
```
# 🚀 Quick Start
## Installation
```bash
# Clone the repository
git clone https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer.git
cd Diffusion-Speech-Tokenizer
# Install dependencies
bash env.sh
```
## Basic Usage
**Please refer to the [use_examples](https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer/tree/main/use_examples) folder for more detailed usage examples.**
### Speech Tokenization and Reconstruction
```python
# Example: Using TaDiCodec for speech tokenization
import torch
import soundfile as sf
from models.tts.tadicodec.inference_tadicodec import TaDiCodecPipline
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pipe = TaDiCodecPipline.from_pretrained(ckpt_dir="./ckpt/TaDiCodec", device=device)
# Text of the prompt audio
prompt_text = "In short, we embarked on a mission to make America great again, for all Americans."
# Text of the target audio
target_text = "But to those who knew her well, it was a symbol of her unwavering determination and spirit."
# Input audio path of the prompt audio
prompt_speech_path = "./use_examples/test_audio/trump_0.wav"
# Input audio path of the target audio
speech_path = "./use_examples/test_audio/trump_1.wav"
rec_audio = pipe(
text=target_text,
speech_path=speech_path,
prompt_text=prompt_text,
prompt_speech_path=prompt_speech_path
)
sf.write("./use_examples/test_audio/trump_rec.wav", rec_audio, 24000)
```
### Zero-shot TTS with TaDiCodec
```python
import torch
import soundfile as sf
from models.tts.llm_tts.inference_llm_tts import TTSInferencePipeline
# from models.tts.llm_tts.inference_mgm_tts import MGMInferencePipeline
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create AR TTS pipeline
pipeline = TTSInferencePipeline.from_pretrained(
tadicodec_path="./ckpt/TaDiCodec",
llm_path="./ckpt/TaDiCodec-TTS-AR-Qwen2.5-3B",
device=device,
)
# Inference on a single sample; you can also use the MGM TTS pipeline
audio = pipeline(
text="但是 to those who 知道 her well, it was a 标志 of her unwavering 决心 and spirit.", # code-switching cases are supported
prompt_text="In short, we embarked on a mission to make America great again, for all Americans.",
prompt_speech_path="./use_examples/test_audio/trump_0.wav",
)
sf.write("./use_examples/test_audio/lm_tts_output.wav", audio, 24000)
```
# 📚 Citation
If you find this repository useful, please cite our paper:
TaDiCodec:
```bibtex
@article{tadicodec2025,
title={TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling},
  author={Yuancheng Wang and Dekun Chen and Xueyao Zhang and Junan Zhang and Jiaqi Li and Zhizheng Wu},
journal={arXiv preprint},
year={2025},
url={https://arxiv.org/abs/2508.16790}
}
```
Amphion:
```bibtex
@inproceedings{amphion,
author={Xueyao Zhang and Liumeng Xue and Yicheng Gu and Yuancheng Wang and Jiaqi Li and Haorui He and Chaoren Wang and Ting Song and Xi Chen and Zihao Fang and Haopeng Chen and Junan Zhang and Tze Ying Tang and Lexiao Zou and Mingxuan Wang and Jun Han and Kai Chen and Haizhou Li and Zhizheng Wu},
title={Amphion: An Open-Source Audio, Music and Speech Generation Toolkit},
booktitle={{IEEE} Spoken Language Technology Workshop, {SLT} 2024},
year={2024}
}
```
MaskGCT:
```bibtex
@inproceedings{wang2024maskgct,
author={Wang, Yuancheng and Zhan, Haoyue and Liu, Liwei and Zeng, Ruihong and Guo, Haotian and Zheng, Jiachen and Zhang, Qiang and Zhang, Xueyao and Zhang, Shunsi and Wu, Zhizheng},
title={MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer},
booktitle = {{ICLR}},
publisher = {OpenReview.net},
year = {2025}
}
```
# 🙏 Acknowledgments
- **MGM-based TTS** is built upon [MaskGCT](https://github.com/open-mmlab/Amphion/tree/main/models/tts/maskgct).
- **Vocos vocoder** is built upon [Vocos](https://github.com/gemelo-ai/vocos).
- **NAR Llama-style transformers** are built upon [transformers](https://github.com/huggingface/transformers).
- **BSQ (Binary Spherical Quantization)** is built upon [vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch) and [bsq-vit](https://github.com/zhaoyue-zephyrus/bsq-vit).
- **Training codebase** is built upon [Amphion](https://github.com/open-mmlab/Amphion) and [accelerate](https://github.com/huggingface/accelerate).
|
vendi11/blockassist-bc-placid_placid_llama_1756797747
|
vendi11
| 2025-09-02T07:23:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:23:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756796098
|
coelacanthxyz
| 2025-09-02T07:23:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:23:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756797614
|
omerbektass
| 2025-09-02T07:20:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:20:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
csikasote/mms-1b-all-swagen-combined-15hrs-62
|
csikasote
| 2025-09-02T07:20:34Z | 13 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"swagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-30T22:16:28Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- swagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-swagen-combined-15hrs-62
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-swagen-combined-15hrs-62
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3051
- Wer: 0.2282
## Model description
More information needed
## Intended uses & limitations
More information needed
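As a starting point, a minimal inference sketch (not part of the original card; it assumes a 16 kHz mono audio file and the standard `transformers` ASR pipeline):
```python
from transformers import pipeline

# Hypothetical usage of the fine-tuned checkpoint for transcription.
asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-swagen-combined-15hrs-62",
)
print(asr("sample.wav")["text"])
```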
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 62
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.9911 | 0.1594 | 200 | 2.1622 | 0.9834 |
| 1.6509 | 0.3189 | 400 | 0.3235 | 0.2094 |
| 1.2654 | 0.4783 | 600 | 0.3102 | 0.2190 |
| 1.249 | 0.6377 | 800 | 0.3162 | 0.2188 |
| 1.1663 | 0.7971 | 1000 | 0.3062 | 0.2192 |
| 1.1795 | 0.9566 | 1200 | 0.3083 | 0.2244 |
| 1.1537 | 1.1156 | 1400 | 0.3119 | 0.2277 |
| 1.1607 | 1.2750 | 1600 | 0.3050 | 0.2282 |
| 1.08 | 1.4344 | 1800 | 0.3056 | 0.2293 |
| 1.1027 | 1.5939 | 2000 | 0.3099 | 0.2333 |
| 1.0601 | 1.7533 | 2200 | 0.3063 | 0.2340 |
| 1.0959 | 1.9127 | 2400 | 0.3070 | 0.2333 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756797576
|
2hpsatt
| 2025-09-02T07:20:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:20:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF
|
mradermacher
| 2025-09-02T07:20:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"lora",
"sft",
"trl",
"unsloth",
"fine-tuned",
"en",
"dataset:theprint/Empathetic-Alpaca",
"base_model:theprint/Empathetic-Llama-3.2-3B-Instruct",
"base_model:adapter:theprint/Empathetic-Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-02T05:59:04Z |
---
base_model: theprint/Empathetic-Llama-3.2-3B-Instruct
datasets:
- theprint/Empathetic-Alpaca
language: en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- lora
- sft
- transformers
- trl
- unsloth
- fine-tuned
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/theprint/Empathetic-Llama-3.2-3B-Instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Empathetic-Llama-3.2-3B-Instruct-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
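For a quick start, a minimal `llama.cpp` sketch (assumes a built `llama-cli` binary and one of the quant files from the table below):
```sh
# Run a single prompt against the Q4_K_M quant (recommended below).
./llama-cli -m Empathetic-Llama-3.2-3B-Instruct.Q4_K_M.gguf \
  -p "I had a rough day and just need someone to listen."
```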
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Empathetic-Llama-3.2-3B-Instruct-GGUF/resolve/main/Empathetic-Llama-3.2-3B-Instruct.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
csikasote/mms-1b-all-swagen-combined-15hrs-62-DAT
|
csikasote
| 2025-09-02T07:18:27Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"swagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-02T06:54:09Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- swagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-swagen-combined-15hrs-62-DAT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-swagen-combined-15hrs-62-DAT
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3091
- Wer: 0.2227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 62
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.9949 | 0.1594 | 200 | 2.1711 | 0.9835 |
| 1.653 | 0.3189 | 400 | 0.3278 | 0.2094 |
| 1.2653 | 0.4783 | 600 | 0.3203 | 0.2166 |
| 1.2496 | 0.6377 | 800 | 0.3253 | 0.2196 |
| 1.1674 | 0.7971 | 1000 | 0.3110 | 0.2216 |
| 1.1796 | 0.9566 | 1200 | 0.3091 | 0.2228 |
| 1.1576 | 1.1156 | 1400 | 0.3193 | 0.2265 |
| 1.1601 | 1.2750 | 1600 | 0.3150 | 0.2305 |
| 1.0825 | 1.4344 | 1800 | 0.3119 | 0.2289 |
| 1.104 | 1.5939 | 2000 | 0.3189 | 0.2347 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
Novachrono93/Lazypos
|
Novachrono93
| 2025-09-02T07:18:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
"base_model:adapter:dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
"region:us"
] |
text-to-image
| 2025-09-02T06:57:13Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/1000041480.jpg
  text: >-
    Decoded from mis-encoded UTF-16 ComfyUI workflow metadata: img2img-hires
    workflow using upscale model urn:air:other:upscaler:civitai:147759@164821.
    Prompt: "lazypos, best quality, refined detailed, high quality, highres,
    cinematic lighting, absurdres, 8k, ... 1girl, hair ornament, hair flower,
    sword, kimono, katana, ... Flat vector illustration". Negative prompt:
    "low quality, worst quality, ... rough sketch, discolored, lazyneg,
    lazyhand". cfgScale 4, sampler "Euler a", seed 1304240790, clipSkip 2,
    steps 25, denoise 0.55, aspect ratio 13:19, base model Illustrious,
    736x1308.
base_model: dhead/wai-nsfw-illustrious-sdxl-v140-sdxl
instance_prompt: lazypos
---
# Lazy Embeddings
<Gallery />
## Trigger words
You should use `lazypos` to trigger the image generation.
## Download model
[Download](/Novachrono93/Lazypos/tree/main) them in the Files & versions tab.
|
bah63843/blockassist-bc-plump_fast_antelope_1756797439
|
bah63843
| 2025-09-02T07:18:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:18:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756797328
|
akirafudo
| 2025-09-02T07:15:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:15:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1756797268
|
vendi11
| 2025-09-02T07:15:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:15:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
erik-svensson-cm/whisper-large-v3-ct2
|
erik-svensson-cm
| 2025-09-02T07:12:30Z | 0 | 0 | null |
[
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T07:05:06Z |
---
license: apache-2.0
base_model:
- openai/whisper-large-v3
---
|
csikasote/mms-1b-all-swagen-combined-15hrs-42
|
csikasote
| 2025-09-02T07:12:08Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"swagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-30T11:50:24Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- swagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-swagen-combined-15hrs-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-swagen-combined-15hrs-42
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2994
- Wer: 0.2165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 7.1631 | 0.1594 | 200 | 2.3619 | 1.0 |
| 1.7227 | 0.3189 | 400 | 0.3239 | 0.2116 |
| 1.297 | 0.4783 | 600 | 0.3251 | 0.2131 |
| 1.2198 | 0.6377 | 800 | 0.2994 | 0.2166 |
| 1.2152 | 0.7971 | 1000 | 0.3089 | 0.2197 |
| 1.1905 | 0.9566 | 1200 | 0.3024 | 0.2238 |
| 1.1545 | 1.1156 | 1400 | 0.3078 | 0.2279 |
| 1.1572 | 1.2750 | 1600 | 0.2999 | 0.2308 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
arturkakraft/blockassist-bc-arctic_purring_camel_1756795822
|
arturkakraft
| 2025-09-02T07:09:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic purring camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:09:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic purring camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756795136
|
Loder-S
| 2025-09-02T07:03:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:03:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756796474
|
liukevin666
| 2025-09-02T07:02:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:02:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756796472
|
omerbektass
| 2025-09-02T07:01:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:01:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756796416
|
matherchodhuuu
| 2025-09-02T07:01:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:01:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756796430
|
bah63843
| 2025-09-02T07:01:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:01:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Novachrono93/Lazyneg
|
Novachrono93
| 2025-09-02T06:59:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
"base_model:adapter:dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
"region:us"
] |
text-to-image
| 2025-09-02T06:59:20Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/1000041480.jpg
  text: >-
    Decoded from mis-encoded UTF-16 ComfyUI workflow metadata: img2img-hires
    workflow using upscale model urn:air:other:upscaler:civitai:147759@164821.
    Prompt: "lazypos, best quality, refined detailed, high quality, highres,
    cinematic lighting, absurdres, 8k, ... 1girl, hair ornament, hair flower,
    sword, kimono, katana, ... Flat vector illustration". Negative prompt:
    "low quality, worst quality, ... rough sketch, discolored, lazyneg,
    lazyhand". cfgScale 4, sampler "Euler a", seed 1304240790, clipSkip 2,
    steps 25, denoise 0.55, aspect ratio 13:19, base model Illustrious,
    736x1308.
base_model: dhead/wai-nsfw-illustrious-sdxl-v140-sdxl
instance_prompt: lazyneg
---
# Lazy Embeddings
<Gallery />
## Trigger words
You should use `lazyneg` to trigger the image generation.
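As an illustration, here is a minimal sketch assuming a diffusers SDXL pipeline and a local `lazyneg.safetensors` file from this repo (the filename is hypothetical, and depending on how the embedding was exported for SDXL it may need to be loaded into both text encoders):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumptions: an Illustrious-based SDXL checkpoint and a locally downloaded
# lazyneg.safetensors embedding file; both names are illustrative.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "dhead/wai-nsfw-illustrious-sdxl-v140-sdxl", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("lazyneg.safetensors", token="lazyneg")

image = pipe(
    prompt="1girl, looking at viewer, detailed background",
    negative_prompt="lazyneg",  # the trigger word activates the embedding
).images[0]
```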
## Download model
[Download](/Novachrono93/Lazyneg/tree/main) them in the Files & versions tab.
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756796236
|
akirafudo
| 2025-09-02T06:57:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:57:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1756796130
|
vendi11
| 2025-09-02T06:56:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:56:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756796118
|
omerbektass
| 2025-09-02T06:55:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:55:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EmilRyd/gpt-oss-20b-aquarat-ground-truth-actually-on-policy-3e5-stylized-1000-80
|
EmilRyd
| 2025-09-02T06:55:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T06:53:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ufal/byt5-large-akces-mate
|
ufal
| 2025-09-02T06:53:37Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"t5",
"text2text-generation",
"Czech",
"GEC",
"AKCES-GEC dataset",
"text-generation",
"cs",
"arxiv:2506.22402",
"base_model:google/byt5-large",
"base_model:finetune:google/byt5-large",
"license:cc-by-nc-sa-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T13:40:56Z |
---
language: cs
license: cc-by-nc-sa-4.0
tags:
- Czech
- GEC
- AKCES-GEC dataset
pipeline_tag: text-generation
library_name: transformers
base_model: google/byt5-large
---
# Model Card for byt5-large-akces-mate
The `byt5-large-akces-mate` model is a sequence-to-sequence model performing
grammar error correction in Czech described in the paper
[Refining Czech GEC: Insights from a Multi-Experiment Approach](https://arxiv.org/abs/2506.22402).
It is a finetuned version of [byt5-large](https://huggingface.co/google/byt5-large) using
the MATE method and the [AKCES-GEC dataset](https://hdl.handle.net/11234/1-3057).
## Model Description
- **Developed by:** [Seznam.cz](https://seznam.cz) and [Charles University, MFF, ÚFAL](https://ufal.mff.cuni.cz/)
- **Language(s) (NLP):** Czech
- **Model type:** character-based encoder-decoder Transformer model
- **Finetuned from model:** `google/byt5-large`
- **Finetuned on:**
- first synthetic errors generated by the MATE method (see [the paper](https://arxiv.org/abs/2506.22402))
- then the [AKCES-GEC dataset](https://hdl.handle.net/11234/1-3057)
- **License:** CC BY-NC-SA 4.0
## Model Sources
- **Repository:** https://github.com/ufal/tsd2025-gec
- **Paper:** [Refining Czech GEC: Insights from a Multi-Experiment Approach](https://arxiv.org/abs/2506.22402)
- **Dataset:** [AKCES-GEC dataset](https://hdl.handle.net/11234/1-3057)
## Evaluation
<div align="center">
<img src="https://github.com/ufal/tsd2025-gec/blob/main/figures/bubble_chart.svg?raw=true" width="75%" alt="Performance bubblechart" />
</div>
| Model | Parameters | GECCC F-0.5 score | AKCES F-0.5 score |
|:------|-----------:|:-----------------:|:-----------------:|
| [byt5-small-geccc-mate](https://hf.co/ufal/byt5-small-geccc-mate) | 300M | 72.56 | |
| [byt5-base-geccc-mate](https://hf.co/ufal/byt5-base-geccc-mate) | 582M | 75.15 | |
| [byt5-large-geccc-mate](https://hf.co/ufal/byt5-large-geccc-mate) | 1275M | 77.01 | |
| [**byt5-large-akces-mate**](https://hf.co/ufal/byt5-large-akces-mate) | **1275M** | | **84.40** |
| [transformer-base-geccc-mate](https://hf.co/ufal/transformer-base-geccc-mate) | 65M | 73.73 | |
## Uses
The model can be directly used to process space-tokenized input Czech text and produce grammar-corrected Czech text.
## How to Get Started with the Model
Use the code below to get started with the model. Note that the input must be **space-tokenized**, i.e., every token (using the [UDPipe 1](https://ufal.mff.cuni.cz/udpipe/1) tokenizer [czech-pdt-ud-2.5-191206.udpipe](https://hdl.handle.net/11234/1-3131)) must be space-separated.
```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("ufal/byt5-large-akces-mate")
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("ufal/byt5-large-akces-mate")
batch = tokenizer(["Sveřepý šakali zavile vyly na býlí mesýc ."], return_tensors="pt")
outputs = model.generate(batch.input_ids, max_length=256, num_beams=4)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
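For reference, here is a minimal sketch of producing the expected space-tokenized input, assuming the `ufal.udpipe` Python bindings and a locally downloaded `czech-pdt-ud-2.5-191206.udpipe` model file:
```python
from ufal.udpipe import Model, Pipeline

# Load the UDPipe tokenizer model (path assumed to be local).
udpipe_model = Model.load("czech-pdt-ud-2.5-191206.udpipe")
# The "horizontal" output format emits one sentence per line,
# with tokens separated by single spaces.
pipeline = Pipeline(udpipe_model, "tokenize", Pipeline.NONE, Pipeline.NONE, "horizontal")
print(pipeline.process("Sveřepý šakali zavile vyly na býlí mesýc."))
```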
## BibTeX Citation
```
@InProceedings{10.1007/978-3-032-02551-7_7,
author="Pechman, Petr and Straka, Milan and Strakov{\'a}, Jana and N{\'a}plava, Jakub",
editor="Ek{\v{s}}tein, Kamil and Konop{\'i}k, Miloslav and Pra{\v{z}}{\'a}k, Ond{\v{r}}ej and P{\'a}rtl, Franti{\v{s}}ek",
title="Refining Czech GEC: Insights from a Multi-experiment Approach",
booktitle="Text, Speech, and Dialogue",
year="2026",
publisher="Springer Nature Switzerland",
address="Cham",
pages="64--76",
isbn="978-3-032-02551-7",
doi="10.1007/978-3-032-02551-7_7"
}
```
|
ufal/byt5-large-geccc-mate
|
ufal
| 2025-09-02T06:52:47Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"t5",
"text2text-generation",
"Czech",
"GEC",
"GECCC dataset",
"text-generation",
"cs",
"arxiv:2506.22402",
"base_model:google/byt5-large",
"base_model:finetune:google/byt5-large",
"license:cc-by-nc-sa-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T14:26:05Z |
---
language: cs
license: cc-by-nc-sa-4.0
tags:
- Czech
- GEC
- GECCC dataset
pipeline_tag: text-generation
library_name: transformers
base_model: google/byt5-large
---
# Model Card for byt5-large-geccc-mate
The `byt5-large-geccc-mate` model is a sequence-to-sequence model performing
grammar error correction in Czech described in the paper
[Refining Czech GEC: Insights from a Multi-Experiment Approach](https://arxiv.org/abs/2506.22402).
It is a finetuned version of [byt5-large](https://huggingface.co/google/byt5-large) using
the MATE method and the [GECCC dataset](https://hdl.handle.net/11234/1-4861).
## Model Description
- **Developed by:** [Seznam.cz](https://seznam.cz) and [Charles University, MFF, ÚFAL](https://ufal.mff.cuni.cz/)
- **Language(s) (NLP):** Czech
- **Model type:** character-based encoder-decoder Transformer model
- **Finetuned from model:** `google/byt5-large`
- **Finetuned on:**
- first synthetic errors generated by the MATE method (see [the paper](https://arxiv.org/abs/2506.22402))
- then the [GECCC dataset](https://hdl.handle.net/11234/1-4861)
- **License:** CC BY-NC-SA 4.0
## Model Sources
- **Repository:** https://github.com/ufal/tsd2025-gec
- **Paper:** [Refining Czech GEC: Insights from a Multi-Experiment Approach](https://arxiv.org/abs/2506.22402)
- **Dataset:** [GECCC dataset](https://hdl.handle.net/11234/1-4861)
## Evaluation
<div align="center">
<img src="https://github.com/ufal/tsd2025-gec/blob/main/figures/bubble_chart.svg?raw=true" width="75%" alt="Performance bubblechart" />
</div>
| Model | Parameters | GECCC F-0.5 score | AKCES F-0.5 score |
|:------|-----------:|:-----------------:|:-----------------:|
| [byt5-small-geccc-mate](https://hf.co/ufal/byt5-small-geccc-mate) | 300M | 72.56 | |
| [byt5-base-geccc-mate](https://hf.co/ufal/byt5-base-geccc-mate) | 582M | 75.15 | |
| [**byt5-large-geccc-mate**](https://hf.co/ufal/byt5-large-geccc-mate) | **1275M** | **77.01** | |
| [byt5-large-akces-mate](https://hf.co/ufal/byt5-large-akces-mate) | 1275M | | 84.40 |
| [transformer-base-geccc-mate](https://hf.co/ufal/transformer-base-geccc-mate) | 65M | 73.73 | |
## Uses
The model can be directly used to process space-tokenized input Czech text and produce grammar-corrected Czech text.
## How to Get Started with the Model
Use the code below to get started with the model. Note that the input must be **space-tokenized**, i.e., every token (using the [UDPipe 1](https://ufal.mff.cuni.cz/udpipe/1) tokenizer [czech-pdt-ud-2.5-191206.udpipe](https://hdl.handle.net/11234/1-3131)) must be space-separated.
```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("ufal/byt5-large-geccc-mate")
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("ufal/byt5-large-geccc-mate")
batch = tokenizer(["Sveřepý šakali zavile vyly na býlí mesýc ."], return_tensors="pt")
outputs = model.generate(batch.input_ids, max_length=256, num_beams=4)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
## BibTeX Citation
```
@InProceedings{10.1007/978-3-032-02551-7_7,
author="Pechman, Petr and Straka, Milan and Strakov{\'a}, Jana and N{\'a}plava, Jakub",
editor="Ek{\v{s}}tein, Kamil and Konop{\'i}k, Miloslav and Pra{\v{z}}{\'a}k, Ond{\v{r}}ej and P{\'a}rtl, Franti{\v{s}}ek",
title="Refining Czech GEC: Insights from a Multi-experiment Approach",
booktitle="Text, Speech, and Dialogue",
year="2026",
publisher="Springer Nature Switzerland",
address="Cham",
pages="64--76",
isbn="978-3-032-02551-7",
doi="10.1007/978-3-032-02551-7_7"
}
```
|
Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1756795892
|
Rudra-madlads
| 2025-09-02T06:52:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping swift gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:52:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756795828
|
liukevin666
| 2025-09-02T06:51:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:51:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LarryAIDraw/klulai_swimsuit
|
LarryAIDraw
| 2025-09-02T06:50:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-09-02T06:36:35Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/1841049/hk416-klukai-orgirls-frontline-2-outfit-2-cerulean-breaker
|
y1y2y3/third_diffusion_reduced
|
y1y2y3
| 2025-09-02T06:50:11Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:y1y2y3/so101_test3",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T06:13:10Z |
---
datasets: y1y2y3/so101_test3
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- robotics
- lerobot
- diffusion
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
LarryAIDraw/SummerIchika
|
LarryAIDraw
| 2025-09-02T06:50:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-09-02T06:35:59Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/1800130/nakamasa-ichika-summer-blue-archive — there is also a 米家 VIP item; I want to see 瑕蝶 from Honkai: Star Rail (崩铁); the model posted on Notion on June 3 looks good.
|
LarryAIDraw/dimensionalleapspell_v10
|
LarryAIDraw
| 2025-09-02T06:49:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-09-02T06:34:04Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/1919928/dimensionalleapspell?modelVersionId=2173057
|
tencent/Hunyuan-7B-Pretrain
|
tencent
| 2025-09-02T06:47:36Z | 202 | 11 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-28T06:35:56Z |
---
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-7B-Pretrain"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-7B"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** , **Hunyuan-0.5B-Instruct** , **Hunyuan-1.8B-Pretrain** , **Hunyuan-1.8B-Instruct** , **Hunyuan-4B-Pretrain** , **Hunyuan-4B-Instruct** , **Hunyuan-7B-Pretrain** ,**Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks were evaluated with the TRT-LLM backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install transformers.
```SHELL
pip install "transformers>=4.56.0"
```
Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load the model and run it. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True,return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
We recommend using the following set of parameters for inference. Note that our model does not have a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
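As a minimal sketch (reusing the `model` and `tokenized_chat` objects from the snippet above), these parameters can be passed directly to `generate`:
```python
outputs = model.generate(
    tokenized_chat.to(model.device),
    max_new_tokens=2048,
    do_sample=True,       # enable sampling
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    temperature=0.7,
)
```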
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In the following chapter, we will introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformer Library**: Use the companion branch (Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
*Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
*Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngleSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantized models. `AngleSlim` is a toolset dedicated to creating a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization: model weights and activation values are converted to an 8-bit floating-point format, with the quantization scales pre-determined from a small amount of calibration data (no training required), improving inference efficiency and lowering the deployment threshold. You can quantize the model yourself with AngleSlim, or directly download our released quantized open-source models [LINK](https://huggingface.co/).
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights; the weights are adjusted layer by layer via an optimization procedure that approximates the inverse Hessian matrix. The process eliminates the need to retrain the model and requires only a small amount of calibration data to quantize the weights, improving inference efficiency and lowering the deployment threshold.
AWQ also uses a small amount of calibration data (no training required) to compute statistics over the activation magnitudes. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize the model yourself with [AngleSlim](https://github.com/tencent/AngelSlim), or directly download our released quantized open-source models [LINK](https://huggingface.co/).
#### Quantization Benchmark
This subsection describes the benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker images: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct for example
- To get started:
https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vllm
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download the model file:
- Hugging Face: downloaded automatically by vLLM.
- ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- Model downloaded from Hugging Face:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- Model downloaded from ModelScope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service script runs successfully, send a test request:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
#### Quantized model deployment
This section describes the process of deploying a post-quantization model using vLLM.
The default server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable.
Next we start the Int8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4-weight-only version of the HunYuan-7B model (using the GPTQ method) only requires setting the `MODEL_PATH` environment variable:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next we start the Int4 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable.
Next we start the FP8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct for example
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also reach us via email (hunyuan_opensource@tencent.com).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756795612
|
omerbkts
| 2025-09-02T06:47:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:47:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sonic-man/blockassist-bc-poisonous_graceful_cow_1756793366
|
Sonic-man
| 2025-09-02T06:46:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"poisonous graceful cow",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:46:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- poisonous graceful cow
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ufal/byt5-base-geccc-mate
|
ufal
| 2025-09-02T06:46:09Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"t5",
"text2text-generation",
"Czech",
"GEC",
"GECCC dataset",
"text-generation",
"cs",
"arxiv:2506.22402",
"base_model:google/byt5-base",
"base_model:finetune:google/byt5-base",
"license:cc-by-nc-sa-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T14:23:08Z |
---
language: cs
license: cc-by-nc-sa-4.0
tags:
- Czech
- GEC
- GECCC dataset
pipeline_tag: text-generation
library_name: transformers
base_model: google/byt5-base
---
# Model Card for byt5-base-geccc-mate
The `byt5-base-geccc-mate` model is a sequence-to-sequence model performing
grammar error correction in Czech described in the paper
[Refining Czech GEC: Insights from a Multi-Experiment Approach](https://arxiv.org/abs/2506.22402).
It is a finetuned version of [byt5-base](https://huggingface.co/google/byt5-base) using
the MATE method and the [GECCC dataset](https://hdl.handle.net/11234/1-4861).
## Model Description
- **Developed by:** [Seznam.cz](https://seznam.cz) and [Charles University, MFF, ÚFAL](https://ufal.mff.cuni.cz/)
- **Language(s) (NLP):** Czech
- **Model type:** character-based encoder-decoder Transformer model
- **Finetuned from model:** `google/byt5-base`
- **Finetuned on:**
- first synthetic errors generated by the MATE method (see [the paper](https://arxiv.org/abs/2506.22402))
- then the [GECCC dataset](https://hdl.handle.net/11234/1-4861)
- **License:** CC BY-NC-SA 4.0
## Model Sources
- **Repository:** https://github.com/ufal/tsd2025-gec
- **Paper:** [Refining Czech GEC: Insights from a Multi-Experiment Approach](https://arxiv.org/abs/2506.22402)
- **Dataset:** [GECCC dataset](https://hdl.handle.net/11234/1-4861)
## Evaluation
<div align="center">
<img src="https://github.com/ufal/tsd2025-gec/blob/main/figures/bubble_chart.svg?raw=true" width="75%" alt="Performance bubblechart" />
</div>
| Model | Parameters | GECCC F-0.5 score | AKCES F-0.5 score |
|:------|-----------:|:-----------------:|:-----------------:|
| [byt5-small-geccc-mate](https://hf.co/ufal/byt5-small-geccc-mate) | 300M | 72.56 | |
| [**byt5-base-geccc-mate**](https://hf.co/ufal/byt5-base-geccc-mate) | **582M** | **75.15** | |
| [byt5-large-geccc-mate](https://hf.co/ufal/byt5-large-geccc-mate) | 1275M | 77.01 | |
| [byt5-large-akces-mate](https://hf.co/ufal/byt5-large-akces-mate) | 1275M | | 84.40 |
| [transformer-base-geccc-mate](https://hf.co/ufal/transformer-base-geccc-mate) | 65M | 73.73 | |
## Uses
The model can be directly used to process space-tokenized input Czech text and produce grammar-corrected Czech text.
## How to Get Started with the Model
Use the code below to get started with the model. Note that the input must be **space-tokenized**, i.e., every token (using the [UDPipe 1](https://ufal.mff.cuni.cz/udpipe/1) tokenizer [czech-pdt-ud-2.5-191206.udpipe](https://hdl.handle.net/11234/1-3131)) must be space-separated.
```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("ufal/byt5-base-geccc-mate")
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("ufal/byt5-base-geccc-mate")
batch = tokenizer(["Sveřepý šakali zavile vyly na býlí mesýc ."], return_tensors="pt")
outputs = model.generate(batch.input_ids, max_length=256, num_beams=4)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
## BibTeX Citation
```
@InProceedings{10.1007/978-3-032-02551-7_7,
author="Pechman, Petr and Straka, Milan and Strakov{\'a}, Jana and N{\'a}plava, Jakub",
editor="Ek{\v{s}}tein, Kamil and Konop{\'i}k, Miloslav and Pra{\v{z}}{\'a}k, Ond{\v{r}}ej and P{\'a}rtl, Franti{\v{s}}ek",
title="Refining Czech GEC: Insights from a Multi-experiment Approach",
booktitle="Text, Speech, and Dialogue",
year="2026",
publisher="Springer Nature Switzerland",
address="Cham",
pages="64--76",
isbn="978-3-032-02551-7",
doi="10.1007/978-3-032-02551-7_7"
}
```
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756795262
|
omerbkts
| 2025-09-02T06:41:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:41:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756795212
|
aXsalll
| 2025-09-02T06:41:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:40:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
meetrathi97/isec_llama3.2_iteration_4
|
meetrathi97
| 2025-09-02T06:41:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T06:39:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vansh-khaneja/Llama-2-7b-chat-finetune
|
vansh-khaneja
| 2025-09-02T06:39:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T06:37:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vennertou/blockassist-bc-pudgy_nimble_bobcat_1756795138
|
vennertou
| 2025-09-02T06:39:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pudgy nimble bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:38:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy nimble bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756795132
|
akirafudo
| 2025-09-02T06:39:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:39:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756795021
|
matherchodhuuu
| 2025-09-02T06:38:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:38:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pidbu/blockassist-bc-whistling_alert_shrew_1756794984
|
pidbu
| 2025-09-02T06:37:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:37:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756794624
|
omerbektass
| 2025-09-02T06:30:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:30:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
allenai/MolmoAct-7B-D-Pretrain-0812
|
allenai
| 2025-09-02T06:30:41Z | 790 | 8 |
transformers
|
[
"transformers",
"safetensors",
"molmoact",
"image-text-to-text",
"molmo",
"olmo",
"reasoning",
"vla",
"robotics",
"manipulation",
"custom_code",
"en",
"arxiv:2508.07917",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-09T05:16:59Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B
- google/siglip2-so400m-patch14-384
library_name: transformers
tags:
- molmoact
- molmo
- olmo
- reasoning
- vla
- robotics
- manipulation
paper: 2508.07917
---
<img src="molmoact_logo.svg" alt="MolmoAct Logo" style="width: auto; height: 50px;">
# MolmoAct 7B-D Pretrain
MolmoAct is a fully open-source action reasoning model for robotic manipulation developed by the Allen Institute for AI. MolmoAct is trained on a subset of OXE and MolmoAct Dataset, a dataset with 10k high-quality trajectories of a single-arm Franka robot performing 93 unique manipulation tasks in both home and tabletop environments. It has state-of-the-art performance among vision-language-action models on multiple benchmarks while being fully open-source. You can find all models in the MolmoAct family [here](https://huggingface.co/collections/allenai/molmoact-689697591a3936fba38174d7).
**Learn more about MolmoAct** in our announcement [blog post](https://allenai.org/blog/molmoact) or the [paper](https://arxiv.org/abs/2508.07917).
**MolmoAct 7B-D Pretrain** is based on [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) and uses [SigLip2](https://huggingface.co/google/siglip2-so400m-patch14-384) as the vision backbone, which is initialized using Molmo's pre-training approach. It is pre-trained on MolmoAct's [Pre-training Mixture](https://huggingface.co/datasets/allenai/MolmoAct-Pretraining-Mixture). This model is intended to be used for downstream mid-training, or for replicating our zero-shot results on SimplerEnv (Google Robot).
This checkpoint is a **preview** of the MolmoAct release. All artifacts used in creating MolmoAct (data, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.
**Update:** Checkpoints are now stored in FP32 (previously BF16). The model was trained in FP32, so publishing FP32 weights aligns with training and enables fine-tuning or continued training directly from this repo. For inference, you can still run BF16 by casting at load, which is what we did for evaluations. See more in the [instructions](#quick-start) below.
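As a minimal sketch of the dtype choice (using the same `AutoModelForImageTextToText` entry point as in the Quick Start below; not an additional official API):

```python
from transformers import AutoModelForImageTextToText
import torch

ckpt = "allenai/MolmoAct-7B-D-Pretrain-0812"

# Load the published FP32 weights as-is, e.g. for fine-tuning or continued training.
model_fp32 = AutoModelForImageTextToText.from_pretrained(
    ckpt,
    trust_remote_code=True,
    torch_dtype="float32",
    device_map="auto",
)

# For inference only, cast to BF16 at load time (this is what we did for evaluations).
model_bf16 = AutoModelForImageTextToText.from_pretrained(
    ckpt,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```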
Quick links:
- 📂 [All Models](https://huggingface.co/collections/allenai/molmoact-689697591a3936fba38174d7)
- 📂 [All Data](https://huggingface.co/collections/allenai/molmoact-data-mixture-6897e583e13b6c2cf3ea2b80)
- 📃 [Paper](https://arxiv.org/abs/2508.07917)
- 💻 [Code](https://github.com/allenai/MolmoAct)
- 🎥 [Blog Post](https://allenai.org/blog/molmoact)
- 🎥 [Video](https://youtu.be/-_wag1X25OE?si=Xi_kUaJTmcQBx1f6)
## Quick Start
To run MolmoAct, first install dependencies:
```bash
pip install einops torchvision accelerate
pip install transformers==4.52
```
Then, follow these steps:
```python
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch
from PIL import Image
import requests
from io import BytesIO
ckpt = "allenai/MolmoAct-7B-D-Pretrain-0812"
# load the processor
processor = AutoProcessor.from_pretrained(
ckpt,
trust_remote_code=True,
torch_dtype="bfloat16",
device_map="auto",
padding_side="left",
)
# load the model
model = AutoModelForImageTextToText.from_pretrained(
ckpt,
trust_remote_code=True,
torch_dtype="bfloat16",
device_map="auto",
)
# task instruction
instruction = "pick orange can"
# strictly follow this reasoning prompt
prompt = (
f"The task is {instruction}. "
"What is the action that the robot should take. "
f"To figure out the action that the robot should take to {instruction}, "
"let's think through it step by step. "
"First, what is the depth map for this image? "
"Second, what is the trajectory of the end effector? "
"Based on the depth map of the image and the trajectory of the end effector, "
"what is the action that the robot should take?"
)
# apply chat template
text = processor.apply_chat_template(
[
{
"role": "user",
"content": [dict(type="text", text=prompt)]
}
],
tokenize=False,
add_generation_prompt=True,
)
# image observation
url = "https://huggingface.co/allenai/MolmoAct-7B-D-Pretrain-0812/resolve/main/example.png"
r = requests.get(url, headers={"User-Agent": "python-requests"}, timeout=30)
r.raise_for_status()
img = Image.open(BytesIO(r.content)).convert("RGB")
imgs = [img]
# process the image and text
inputs = processor(
images=[imgs],
text=text,
padding=True,
return_tensors="pt",
)
# move inputs to the correct device
inputs = {k: v.to(model.device) for k, v in inputs.items()}
# generate output
with torch.inference_mode():
with torch.autocast("cuda", enabled=True, dtype=torch.bfloat16):
generated_ids = model.generate(**inputs, max_new_tokens=256)
# only get generated tokens; decode them to text
generated_tokens = generated_ids[:, inputs['input_ids'].size(1):]
generated_text = processor.batch_decode(generated_tokens, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# print the generated text
print(f"generated text: {generated_text}")
# >>> The depth map of the image is ... The trajectory of the end effector is ...
# Based on these information, the action that the robot should take is ...
# parse out all depth perception tokens
depth = model.parse_depth(generated_text)
print(f"generated depth perception tokens: {depth}")
# >>> [ "<DEPTH_START><DEPTH_1><DEPTH_2>...<DEPTH_END>" ]
# parse out all visual reasoning traces
trace = model.parse_trace(generated_text)
print(f"generated visual reasoning trace: {trace}")
# >>> [ [[242, 115], [140, 77], [94, 58], [140, 44], [153, 26]]] ]
# parse out all actions, unnormalizing with key of fractal20220817_data
action = model.parse_action(generated_text, unnorm_key="fractal20220817_data")
print(f"generated action: {action}")
# >>> [ [0.0732076061122558, 0.08228153779226191, -0.027760173818644346,
# 0.15932856272248652, -0.09686601126895233, 0.043916773912953344,
# 0.996078431372549] ]
```
## License and Use
This model is licensed under Apache 2.0. It is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
## Model and Hardware Safety
MolmoAct offers the ability to inspect a visual trace of its intended actions in space before they occur, allowing users to ensure safe behavior by proactively auditing and adjusting the actions of any hardware acting under the model’s instructions. MolmoAct’s action space is bounded within the data provided, and compliance is built into the model to prevent excessive force when resistance is detected. Please follow the hardware manufacturer’s guidelines when using this model with a robot and perform all operations in a safely configured environment.
## Citation
```bibtex
@misc{molmoact2025,
title={MolmoAct: Action Reasoning Models that can Reason in Space},
author={Jason Lee and Jiafei Duan and Haoquan Fang and Yuquan Deng and Shuo Liu and Boyang Li and Bohan Fang and Jieyu Zhang and Yi Ru Wang and Sangho Lee and Winson Han and Wilbert Pumacay and Angelica Wu and Rose Hendrix and Karen Farley and Eli VanderBilt and Ali Farhadi and Dieter Fox and Ranjay Krishna},
year={2025},
eprint={2508.07917},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2508.07917}
}
```
|
allenai/MolmoAct-7B-O-0812
|
allenai
| 2025-09-02T06:30:23Z | 155 | 5 |
transformers
|
[
"transformers",
"safetensors",
"molmoact",
"image-text-to-text",
"molmo",
"olmo",
"reasoning",
"vla",
"robotics",
"manipulation",
"custom_code",
"en",
"arxiv:2508.07917",
"base_model:allenai/OLMo-2-1124-7B",
"base_model:finetune:allenai/OLMo-2-1124-7B",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T03:58:37Z |
---
base_model:
- allenai/OLMo-2-1124-7B
- openai/clip-vit-large-patch14-336
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: robotics
tags:
- molmoact
- molmo
- olmo
- reasoning
- vla
- robotics
- manipulation
paper: 2508.07917
---
<img src="molmoact_logo.svg" alt="MolmoAct Logo" style="width: auto; height: 50px;">
# MolmoAct 7B-O
MolmoAct is a fully open-source action reasoning model for robotic manipulation developed by the Allen Institute for AI. MolmoAct is trained on a subset of OXE and MolmoAct Dataset, a dataset with 10k high-quality trajectories of a single-arm Franka robot performing 93 unique manipulation tasks in both home and tabletop environments. It has state-of-the-art performance among vision-language-action models on multiple benchmarks while being fully open-source. You can find all models in the MolmoAct family [here](https://huggingface.co/collections/allenai/molmoact-689697591a3936fba38174d7).
**Learn more about MolmoAct** in our announcement [blog post](https://allenai.org/blog/molmoact) or the [paper](https://arxiv.org/abs/2508.07917).
**MolmoAct 7B-O** is based on [OLMo-2-1124-7B](https://huggingface.co/allenai/OLMo-2-1124-7B) and uses [OpenAI CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336) as the vision backbone, which is initialized using Molmo's pre-training approach. It is first pre-trained on MolmoAct's [Pre-training Mixture](https://huggingface.co/datasets/allenai/MolmoAct-Pretraining-Mixture), and then mid-trained on the [MolmoAct Dataset](https://huggingface.co/datasets/allenai/MolmoAct-Midtraining-Mixture). This model is intended to be used for downstream post-training.
This checkpoint is a **preview** of the MolmoAct release. All artifacts used in creating MolmoAct (data, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.
**Update:** Checkpoints are now stored in FP32 (previously BF16). The model was trained in FP32, so publishing FP32 weights aligns with training and enables fine-tuning or continued training directly from this repo. For inference, you can still run BF16 by casting at load, which is what we did for evaluations. See more in the [instructions](#quick-start) below.
Quick links:
- 📂 [All Models](https://huggingface.co/collections/allenai/molmoact-689697591a3936fba38174d7)
- 📂 [All Data](https://huggingface.co/collections/allenai/molmoact-data-mixture-6897e583e13b6c2cf3ea2b80)
- 📃 [Paper](https://arxiv.org/abs/2508.07917)
- 💻 [Code](https://github.com/allenai/MolmoAct)
- 🎥 [Blog Post](https://allenai.org/blog/molmoact)
- 🎥 [Video](https://youtu.be/-_wag1X25OE?si=Xi_kUaJTmcQBx1f6)
## Quick Start
To run MolmoAct, first install dependencies:
```bash
pip install einops torchvision accelerate
pip install transformers==4.52
```
Then, follow these steps:
```python
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch
from PIL import Image
import requests
from io import BytesIO
ckpt = "allenai/MolmoAct-7B-O-0812"
# load the processor
processor = AutoProcessor.from_pretrained(
ckpt,
trust_remote_code=True,
torch_dtype="bfloat16",
device_map="auto",
padding_side="left",
)
# load the model
model = AutoModelForImageTextToText.from_pretrained(
ckpt,
trust_remote_code=True,
torch_dtype="bfloat16",
device_map="auto",
)
# task instruction
instruction = "close the box"
# strictly follow this reasoning prompt
prompt = (
f"The task is {instruction}. "
"What is the action that the robot should take. "
f"To figure out the action that the robot should take to {instruction}, "
"let's think through it step by step. "
"First, what is the depth map for the first image? "
"Second, what is the trajectory of the end effector in the first image? "
"Based on the depth map of the first image and the trajectory of the end effector in the first image, "
"along with other images from different camera views as additional information, "
"what is the action that the robot should take?"
)
# apply chat template
text = processor.apply_chat_template(
[
{
"role": "user",
"content": [dict(type="text", text=prompt)]
}
],
tokenize=False,
add_generation_prompt=True,
)
# image observation (side + wrist)
url1 = "https://huggingface.co/allenai/MolmoAct-7B-D-0812/resolve/main/example_1.png"
url2 = "https://huggingface.co/allenai/MolmoAct-7B-D-0812/resolve/main/example_2.png"
r1 = requests.get(url1, headers={"User-Agent": "python-requests"}, timeout=30)
r1.raise_for_status()
r2 = requests.get(url2, headers={"User-Agent": "python-requests"}, timeout=30)
r2.raise_for_status()
img1 = Image.open(BytesIO(r1.content)).convert("RGB")
img2 = Image.open(BytesIO(r2.content)).convert("RGB")
imgs = [img1, img2]
# process the image and text
inputs = processor(
images=[imgs],
text=text,
padding=True,
return_tensors="pt",
)
# move inputs to the correct device
inputs = {k: v.to(model.device) for k, v in inputs.items()}
# generate output
with torch.inference_mode():
with torch.autocast("cuda", enabled=True, dtype=torch.bfloat16):
generated_ids = model.generate(**inputs, max_new_tokens=256)
# only get generated tokens; decode them to text
generated_tokens = generated_ids[:, inputs['input_ids'].size(1):]
generated_text = processor.batch_decode(generated_tokens, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# print the generated text
print(f"generated text: {generated_text}")
# >>> The depth map of the first image is ... The trajectory of the end effector in the first image is ...
# Based on these information, along with other images from different camera views as additional information,
# the action that the robot should take is ...
# parse out all depth perception tokens
depth = model.parse_depth(generated_text)
print(f"generated depth perception tokens: {depth}")
# >>> [ "<DEPTH_START><DEPTH_1><DEPTH_2>...<DEPTH_END>" ]
# parse out all visual reasoning traces
trace = model.parse_trace(generated_text)
print(f"generated visual reasoning trace: {trace}")
# >>> [ [[242, 115], [140, 77], [94, 58], [140, 44], [153, 26]]] ]
# parse out all actions, unnormalizing with key of "molmoact"
action = model.parse_action(generated_text, unnorm_key="molmoact")
print(f"generated action: {action}")
# >>> [ [0.0732076061122558, 0.08228153779226191, -0.027760173818644346,
# 0.15932856272248652, -0.09686601126895233, 0.043916773912953344,
# 0.996078431372549] ]
```
## License and Use
This model is licensed under Apache 2.0. It is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
## Model and Hardware Safety
MolmoAct offers the ability to inspect a visual trace of its intended actions in space before they occur, allowing users to ensure safe behavior by proactively auditing and adjusting the actions of any hardware acting under the model’s instructions. MolmoAct’s action space is bounded within the data provided, and compliance is built into the model to prevent excessive force when resistance is detected. Please follow the hardware manufacturer’s guidelines when using this model with a robot and perform all operations in a safely configured environment.
## Citation
```bibtex
@misc{molmoact2025,
title={MolmoAct: Action Reasoning Models that can Reason in Space},
author={Jason Lee and Jiafei Duan and Haoquan Fang and Yuquan Deng and Shuo Liu and Boyang Li and Bohan Fang and Jieyu Zhang and Yi Ru Wang and Sangho Lee and Winson Han and Wilbert Pumacay and Angelica Wu and Rose Hendrix and Karen Farley and Eli VanderBilt and Ali Farhadi and Dieter Fox and Ranjay Krishna},
year={2025},
eprint={2508.07917},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2508.07917}
}
```
|
vendi11/blockassist-bc-placid_placid_llama_1756794402
|
vendi11
| 2025-09-02T06:27:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:27:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
duohuang/Affine-copycat-2
|
duohuang
| 2025-09-02T06:26:06Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"8-bit",
"mxfp4",
"region:us"
] | null | 2025-09-02T06:23:06Z |
# Affine
Mine open reasoning.
[Affine Discord](https://discord.com/invite/3T9X4Yn23e)
## Introduction
Affine is an incentivized RL environment that pays miners who make incremental improvements on a set of tasks (for instance, program abduction or coding). The mechanism is sybil-proof (you can't cheat by deploying multiple miners), decoy-proof (you can't cheat by packing models into certain environments), copy-proof (you can't cheat by stealing models), and overfitting-proof (you can't cheat by overfitting to a single env).
How does Affine work? Affine validators incentivize miners to submit models to Subnet 64 on Bittensor (a.k.a. Chutes), where they are inference load-balanced and publicly available. These models are evaluated on a set of RL environments, with validators looking for the model that dominates the Pareto frontier -- namely, the model that outcompetes all other models on all envs (see `af validator`). The network is winner-takes-all, where miners are forced to copy, download, and improve the Pareto-frontier model.
Why Affine? Directed incentives for RL have never been achieved. The ability to direct intelligence and aggregate the work-effort of a large non-permissioned group of individuals on RL tasks will unlock fast advancement in intelligence. We intend to commoditize reasoning (intelligence's highest form) and break the intelligence sound barrier.
## Installation
```bash
# Install uv Astral
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone and install Affine
git clone https://github.com/AffineFoundation/affine.git
cd affine
uv venv && source .venv/bin/activate && uv pip install -e .
# Verify installation
af
```
## Validating
Set env vars, chutes api key.
```bash
# Copy .env and fill out validator items
cp .env.example .env
```
(Recommended): Run the validator with docker and watchtower autoupdate.
```bash
# Run the validator with watchtower.
docker-compose down && docker-compose pull && docker-compose up -d && docker-compose logs -f
```
Run the validator using the local override (build local image) + base compose
```bash
docker compose -f docker-compose.yml -f docker-compose.local.yml down --remove-orphans
docker compose -f docker-compose.yml -f docker-compose.local.yml up -d --build --remove-orphans
docker compose -f docker-compose.yml -f docker-compose.local.yml logs -f
```
Run the validator locally
```bash
# Start the validator with debug.
af -vv validate
```
# Mining
IMPORTANT: you need a ***developer-enabled account*** on Chutes to mine. Normal API keys cannot deploy chutes right now.
1. Set env vars.
```bash
# Copy .env and fill out validator items
cp .env.example .env
```
2. Miners need a chutes developer account ( `chutes.ai` )
```bash
chutes register
```
3. Register your miner to Affine (S120).
```bash
btcli subnet register --wallet.name <your cold> --wallet.hotkey <your hot>
```
4. Pull a model off the network.
```bash
af -vvv pull <uid to pull> --model_path <i.e. ./my_model>
```
5. Improve the model
```bash
... magic RL stuff ...
```
6. Push the model to your miner.
```bash
af -vvv push --coldkey <your cold> --hotkey <your hot> --model_path <i.e. ./my_model>
```
# SDK
Affine is also an SDK you can use to generate and evaluate model envs.
```python
import affine as af
# Optionally turn on logging
af.trace(); af.debug(); af.info()
# Get all miner info or only for UID =5
miners = await af.get_miners()
miner = await af.get_miners( 5 )
# Generate a SAT challenge
chal = await af.SAT.generate()
# Generate a bunch.
chals = await af.ABDUCTION().many( 10 )
chals = await af.DEDUCTION().many( 10 )
# Query the model directly.
# NOTE: A CHUTES_API_KEY .env value is required for this command.
response = await af.query( chal.prompt, model = miner.model )
# Evaluate the response
evaluation = chal.evaluate( response )
print( evaluation.score )
# Async generator of results from last 100 blocks.
async for res in af.rollouts(100):
print (res) # Result objects
```
|
svjack/Lauma_wan_2_2_14B_lora
|
svjack
| 2025-09-02T06:25:18Z | 2 | 0 | null |
[
"region:us"
] | null | 2025-08-31T07:31:35Z |
# LoRA Model Card: `svjack/Lauma_wan_2_2_14B_lora`
## **Ethereal Forest Guardian Synthesis**
**Base Model**: `Wan2.2_T2V_A14B`
**Fine-tuned Adapters**:
- `Lauma_wan_2_2_14B_lora_000002250_high_noise.safetensors` (weight: 1.0)
- `Lauma_wan_2_2_14B_lora_000002250_low_noise.safetensors` (weight: 1.0)
**Key Strengths**:
- **Natural Elegance**: Captures graceful, organic movements and a deep connection to natural environments.
- **Enchanted Aesthetics**: Excellently renders the contrast between earthy tones (brown/white antlers) and vibrant natural backgrounds.
- **Detailed Attributes**: Intricate rendering of natural elements like antlers, flowing hair, and organic textures (petals, water, fire).
- **Emotional Serenity**: Conveys a spectrum of tranquil emotions, from gentle curiosity to peaceful contemplation and joyful freedom.
---
## **Optimized Example Prompts**
### **Example 1: Morning Harvest**
**Prompt**:
```bash
A female game character with deer antlers.
The first rays of morning sunlight pierce the dense forest, turning into countless soft columns of light that fall on dew-covered grass. Barefoot,
she walks lightly among them; her elegant brown-and-white antlers are like a natural crown, at one with the breath of the forest. She bends down slightly,
her slender fingers carefully picking a pale-blue wildflower, her focused expression gentle and serene. The sunlight gilds her silhouette with a golden edge,
and fine dust motes dance beside her hair, like sprites accompanying her. The whole scene is filled with the quiet yet vibrant beauty of nature.
```
**Adapters**:
- `Lauma_wan_2_2_14B_lora_000002250_high_noise.safetensors` (1.0)
- `Lauma_wan_2_2_14B_lora_000002250_low_noise.safetensors` (1.0)
**Key Features**:
- Soft morning light rays and dew sparkle effects
- Delicate finger movements and focused expression
- Harmonious color palette of earthy tones and soft blues
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/G7l4XR3GAQSExtcoSq_J6.mp4"></video>
---
### **Example 2: Riverside Reading**
**Prompt**:
```bash
A female game character with deer antlers.
By a forest stream in the afternoon, the murmuring water is like natural white noise. She sits down on a flat slab of bluestone, knees together,
a heavy ancient tome spread open on her skirt. She lowers her head slightly, her long hair falling like a waterfall; her distinctive antlers cast a clear, graceful reflection in the water,
intertwining artfully with the reflection of the mist-wreathed mountains in the distance to form a marvelously symmetrical composition. Her fingertips brush lightly across the pages,
her expression focused and serene, as if all the world's clamor has been washed away by the stream, leaving only her quiet dialogue with the world inside the book.
```
**Adapters**:
- `Lauma_wan_2_2_14B_lora_000002250_high_noise.safetensors` (1.0)
- `Lauma_wan_2_2_14B_lora_000002250_low_noise.safetensors` (1.0)
**Key Features**:
- Perfect water reflections creating symmetry
- Dynamic interaction between hair, antlers, and environment
- Peaceful atmosphere with natural sound visual
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/5HjULP29gm6Wjnpd92Pt3.mp4"></video>
---
### **Example 3: Bonfire Under the Stars**
**Prompt**:
```bash
A female game character with deer antlers.
Night spreads across the sky like velvet, dotted with stars. In a forest clearing, a small bonfire crackles, its dancing flames a warm source of light in the darkness.
She sits with her legs curled on a felt blanket, arms wrapped around her knees, her antlers tracing a deep silhouette in the firelight. Flickering sparks, like golden fireflies,
dance lightly around her. She gazes into the flames, warm points of light dancing in her eyes, a faint, tranquil smile at the corners of her mouth.
The cool night and the warm bonfire reach a perfect balance around her; the scene is both mysterious and comforting.
```
**Adapters**:
- `Lauma_wan_2_2_14B_lora_000002250_high_noise.safetensors` (1.0)
- `Lauma_wan_2_2_14B_lora_000002250_low_noise.safetensors` (1.0)
**Key Features**:
- Dramatic contrast between warm firelight and cool night
- Ember and spark particle effects
- Expressive facial features enhanced by flickering light
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/4k5AHDzQJsCdzdn-Aq8xQ.mp4"></video>
---
### **Example 4: Swing in the Petal Shower**
**Prompt**:
```bash
A female game character with deer antlers.
Beneath an ancient tree in full bloom, a swing woven from vines and flowers sways gently. She stands on the swing, hands gripping the vines, and with a strong push swings lightly upward.
The moment she swings up, the blossoms on the tree are stirred and drift down like pink snowflakes. Her skirt and hair stream in the wind,
her smile radiant and free, full of carefree joy. Her antlers seem to join this cheerful rhythm,
dancing with the drifting petals. The whole scene is full of motion, brimming with dreamlike romance and vitality.
```
**Adapters**:
- `Lauma_wan_2_2_14B_lora_000002250_high_noise.safetensors` (1.0)
- `Lauma_wan_2_2_14B_lora_000002250_low_noise.safetensors` (1.0)
**Key Features**:
- Dynamic motion blur and wind effects on hair and dress
- Volumetric petal fall simulation
- Euphoric expression and body language
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/J7n9Q0qOkdHZp1hGjmsRT.mp4"></video>
---
## **Technical Parameters**
| Setting | Recommendation | Notes |
|------------------|--------------------|----------------------------------------|
| **CFG Scale** | 1 (Fixed) | Wan2.2 architecture requirement |
| **Sampler** | uni_pc | Optimal for natural element dynamics |
| **Steps** | 4 | Balances detail and speed |
| **Resolution** | 1280x768 | Maintains VRAM efficiency |
| **Motion Factor**| 2-4 | Subtle for gestures, higher for swipes |
---
## **Performance Profile**
- **VRAM Usage**: ~23GB at 1280x768 (RTX 4090)
- **Render Speed**: 35-50 sec/frame
- **Troubleshooting**:
- **Overexposed Natural Elements**: Add `overbright petals, glare` to negative prompts
- **Stiff Gestures**: Increase motion factor + `flowing_movement` token
- **Color Bleed**: Apply `vibrance_filter` node at 0.2 strength
## **License**
CC-BY-NC-SA 4.0 (Non-commercial, share-alike)
**Community Hub**: https://huggingface.co/svjack/Lauma_wan_2_2_14B_lora/discussions
---
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756794263
|
omerbektass
| 2025-09-02T06:24:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:24:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
desibond/blockassist-bc-thriving_mighty_finch_1756792123
|
desibond
| 2025-09-02T06:24:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving mighty finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:24:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving mighty finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SenseLLM/StructureCoder-3B
|
SenseLLM
| 2025-09-02T06:22:32Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2508.19532",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-28T02:33:12Z |
---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
## Alignment with Fill-In-the-Middle for Enhancing Code Generation
<p align="center">
<a href="https://arxiv.org/abs/2508.19532">📄 Paper</a> •
<a href="https://github.com/SenseLLM/StructureCoder">🏠 Repo</a> •
<a href="https://huggingface.co/SenseLLM/StructureCoder-7B">🤖 Models</a>
</p>
## Introduction
StructureCoder splits code snippets into smaller, granular blocks, creating more diverse DPO pairs from the same test cases. Additionally, we introduce an Abstract Syntax Tree (AST) splitting and curriculum training method to enhance DPO training. Please refer to our paper for more details!

<hr>
## Models
| Model | Checkpoint | Size |
|:--------------------|:------------------------------------------------------------------|:-----|
| StructureCoder-1.5B | 🤗 [HF Link](https://huggingface.co/SenseLLM/StructureCoder-1.5B) | 1.5B |
| StructureCoder-3B | 🤗 [HF Link](https://huggingface.co/SenseLLM/StructureCoder-3B) | 3B |
| StructureCoder-7B | 🤗 [HF Link](https://huggingface.co/SenseLLM/StructureCoder-7B) | 7B |
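As a quick sanity check, here is a minimal, hypothetical generation sketch with 🤗 Transformers (assuming the checkpoint follows the standard Qwen2 chat interface; this is not an official usage example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "SenseLLM/StructureCoder-3B"

tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, device_map="auto")

# Build a chat-formatted prompt for a small coding task.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```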
## Acknowledgments
We thank the following amazing projects that truly inspired us:
- [Qwen-Coder](https://github.com/QwenLM/Qwen3-Coder)
- [APPS](https://github.com/hendrycks/apps)
- [EvalPlus](https://github.com/evalplus/evalplus)
- [LiveCodeBench](https://github.com/LiveCodeBench/LiveCodeBench)
- [BigCodeBench](https://github.com/bigcode-project/bigcodebench)
|
SenseLLM/StructureCoder-1.5B
|
SenseLLM
| 2025-09-02T06:21:53Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2508.19532",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-28T01:52:03Z |
---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
## Alignment with Fill-In-the-Middle for Enhancing Code Generation
<p align="center">
<a href="https://arxiv.org/abs/2508.19532">📄 Paper</a> •
<a href="https://github.com/SenseLLM/StructureCoder">🏠 Repo</a> •
<a href="https://huggingface.co/SenseLLM/StructureCoder-7B">🤖 Models</a>
</p>
## Introduction
StructureCoder splits code snippets into smaller, granular blocks, creating more diverse DPO pairs from the same test cases. Additionally, we introduce an Abstract Syntax Tree (AST) splitting and curriculum training method to enhance DPO training. Please refer to our paper for more details!

<hr>
## Models
| Model | Checkpoint | Size |
|:--------------------|:------------------------------------------------------------------|:-----|
| StructureCoder-1.5B | 🤗 [HF Link](https://huggingface.co/SenseLLM/StructureCoder-1.5B) | 1.5B |
| StructureCoder-3B | 🤗 [HF Link](https://huggingface.co/SenseLLM/StructureCoder-3B) | 3B |
| StructureCoder-7B | 🤗 [HF Link](https://huggingface.co/SenseLLM/StructureCoder-7B) | 7B |
## Acknowledgments
We thank the following amazing projects that truly inspired us:
- [Qwen-Coder](https://github.com/QwenLM/Qwen3-Coder)
- [APPS](https://github.com/hendrycks/apps)
- [EvalPlus](https://github.com/evalplus/evalplus)
- [LiveCodeBench](https://github.com/LiveCodeBench/LiveCodeBench)
- [BigCodeBench](https://github.com/bigcode-project/bigcodebench)
|
Official-Sabrina-Carpenter-New-Music/videos.Sabrina.Carpenter.New.Music.Video.Viral.Official.Tutorial
|
Official-Sabrina-Carpenter-New-Music
| 2025-09-02T06:20:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-02T06:20:39Z |
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756793893
|
liukevin666
| 2025-09-02T06:19:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:19:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756793742
|
klmdr22
| 2025-09-02T06:16:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:16:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756793589
|
matherchodhuuu
| 2025-09-02T06:14:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:14:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amandacute/blockassist-bc-amphibious_plump_ram_1756793617
|
amandacute
| 2025-09-02T06:14:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious plump ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:13:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756793535
|
aXsalll
| 2025-09-02T06:13:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:12:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yadav908ankit/blockassist-bc-deft_wily_armadillo_1756793525
|
yadav908ankit
| 2025-09-02T06:13:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deft wily armadillo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:12:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deft wily armadillo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hyunjoonkang/mirror_pick_and_place_davla
|
hyunjoonkang
| 2025-09-02T06:11:40Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:hyunjoonkang/wx250s_mirror_pick_and_place_blue",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T06:11:14Z |
---
base_model: lerobot/smolvla_base
datasets: hyunjoonkang/wx250s_mirror_pick_and_place_blue
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756791933
|
vwzyrraz7l
| 2025-09-02T06:10:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:10:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/anonymizer-1.7B-SFTonly-GGUF
|
mradermacher
| 2025-09-02T06:10:38Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"anonymization",
"privacy",
"tool-calling",
"qwen",
"en",
"base_model:eternisai/anonymizer-1.7B-SFTonly",
"base_model:quantized:eternisai/anonymizer-1.7B-SFTonly",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-02T05:39:55Z |
---
base_model: eternisai/anonymizer-1.7B-SFTonly
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation
- anonymization
- privacy
- tool-calling
- qwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/eternisai/anonymizer-1.7B-SFTonly
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#anonymizer-1.7B-SFTonly-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
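For a quick start, a minimal sketch in Python (assuming `llama-cpp-python` and `huggingface_hub` are installed; the quant file name is taken from the table below):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from this repo.
path = hf_hub_download(
    repo_id="mradermacher/anonymizer-1.7B-SFTonly-GGUF",
    filename="anonymizer-1.7B-SFTonly.Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello", max_tokens=64)["choices"][0]["text"])
```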
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/anonymizer-1.7B-SFTonly-GGUF/resolve/main/anonymizer-1.7B-SFTonly.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
klmdr22/blockassist-bc-wild_loud_newt_1756793381
|
klmdr22
| 2025-09-02T06:10:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:10:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756793247
|
liukevin666
| 2025-09-02T06:08:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:08:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756793096
|
matherchodhuuu
| 2025-09-02T06:06:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:06:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ibm-granite/granite-embedding-small-english-r2
|
ibm-granite
| 2025-09-02T06:05:43Z | 3,940 | 21 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"safetensors",
"modernbert",
"feature-extraction",
"granite",
"embeddings",
"transformers",
"mteb",
"sentence-similarity",
"en",
"arxiv:2508.21085",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-07-17T20:41:53Z |
---
license: apache-2.0
language:
- en
pipeline_tag: sentence-similarity
library_name: sentence-transformers
tags:
- granite
- embeddings
- transformers
- mteb
---
# Granite-Embedding-Small-English-R2
<!-- Provide a quick summary of what the model is/does. -->
**Model Summary:** Granite-embedding-small-english-r2 is a 47M parameter dense biencoder embedding model from the Granite Embeddings collection that can be used to generate high-quality text embeddings. This model produces embedding vectors of size 384 and supports a context length of up to 8192 tokens. Compared to most other open-source models, this model was trained using only open-source relevance-pair datasets with permissive, enterprise-friendly licenses, plus IBM-collected and IBM-generated datasets.
The r2 models show strong performance across standard and IBM-built information retrieval benchmarks (BEIR, ClapNQ),
code retrieval (COIR), long-document search benchmarks (MLDR, LongEmbed), conversational multi-turn (MTRAG),
table retrieval (NQTables, OTT-QA, AIT-QA, MultiHierTT, OpenWikiTables), and on many enterprise use cases.
These models use a bi-encoder architecture to generate high-quality embeddings from text inputs such as queries, passages, and documents, enabling seamless comparison through cosine similarity. Built using retrieval oriented pretraining, contrastive finetuning, knowledge distillation, and model merging, granite-embedding-small-english-r2 is optimized to ensure strong alignment between query and passage embeddings.
The latest granite embedding r2 release introduces two English embedding models, both based on the ModernBERT architecture:
- _granite-embedding-english-r2_ (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
- **_granite-embedding-small-english-r2_** (**47M** parameters): A _first-of-its-kind_ reduced-size model, with 8192 context length support, fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.
## Model Details
- **Developed by:** Granite Embedding Team, IBM
- **Repository:** [ibm-granite/granite-embedding-models](https://github.com/ibm-granite/granite-embedding-models)
- **Paper:** [Granite Embedding R2 Models](https://arxiv.org/abs/2508.21085)
- **Language(s):** English
- **Release Date**: Aug 15, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Usage
**Intended Use:** The model is designed to produce fixed length vector representations for a given text, which can be used for text similarity, retrieval, and search applications.
These models use Flash Attention 2 for efficient inference. Installing it is optional, but can speed up encoding.
```shell
pip install flash_attn==2.6.1
```
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Usage with Sentence Transformers:**
The model is compatible with the SentenceTransformers library and is easy to use:
First, install the sentence transformers library
```shell
pip install sentence_transformers
```
The model can then be used to encode pairs of text and find the similarity between their representations
```python
from sentence_transformers import SentenceTransformer, util
model_path = "ibm-granite/granite-embedding-small-english-r2"
# Load the Sentence Transformer model
model = SentenceTransformer(model_path)
input_queries = [
' Who made the song My achy breaky heart? ',
'summit define'
]
input_passages = [
"Achy Breaky Heart is a country song written by Don Von Tress. Originally titled Don't Tell My Heart and performed by The Marcy Brothers in 1991. ",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
# encode queries and passages. The model produces unnormalized vectors. If your task requires normalized embeddings pass normalize_embeddings=True to encode as below.
query_embeddings = model.encode(input_queries)
passage_embeddings = model.encode(input_passages)
# calculate cosine similarity
print(util.cos_sim(query_embeddings, passage_embeddings))
```
**Usage with Huggingface Transformers:**
This is a simple example of how to use the granite-embedding-small-english-r2 model with the Transformers library and PyTorch.
First, install the required libraries
```shell
pip install transformers torch
```
The model can then be used to encode pairs of text
```python
import torch
from transformers import AutoModel, AutoTokenizer
model_path = "ibm-granite/granite-embedding-small-english-r2"
# Load the model and tokenizer
model = AutoModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()
input_queries = [
' Who made the song My achy breaky heart? ',
'summit define'
]
# tokenize inputs
tokenized_queries = tokenizer(input_queries, padding=True, truncation=True, return_tensors='pt')
# encode queries
with torch.no_grad():
# Queries
model_output = model(**tokenized_queries)
    # Perform pooling. granite-embedding-small-english-r2 uses CLS pooling
query_embeddings = model_output[0][:, 0]
# normalize the embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, dim=1)
```
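To mirror the Sentence Transformers example above, a short continuation of the same snippet encodes passages the same way and compares them by cosine similarity (a dot product of the normalized embeddings):

```python
input_passages = [
    "Achy Breaky Heart is a country song written by Don Von Tress.",
    "A summit is the highest point of a mountain.",
]

# tokenize and encode passages with the same CLS pooling as the queries
tokenized_passages = tokenizer(input_passages, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    passage_embeddings = model(**tokenized_passages)[0][:, 0]
passage_embeddings = torch.nn.functional.normalize(passage_embeddings, dim=1)

# cosine similarity of normalized vectors reduces to a matrix product
print(query_embeddings @ passage_embeddings.T)
```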
## Evaluation Results
Granite embedding r2 models show strong performance across diverse tasks.
Performance of the granite models on MTEB Retrieval (i.e., BEIR), MTEB-v2, code retrieval (CoIR), long-document search (MLDR, LongEmbed), conversational multi-turn (MTRAG), and
table retrieval (NQTables, OTT-QA, AIT-QA, MultiHierTT, OpenWikiTables) benchmarks is reported in the tables below.
The average speed to encode documents on a single H100 GPU, using a sliding window with 512-token chunks, is also reported.
Nearing an encoding speed of 200 documents per second, granite-embedding-small-english-r2 demonstrates speed and efficiency while maintaining competitive performance.
| Model | Parameters (M) | Embedding Size | BEIR Retrieval (15) | MTEB-v2 (41)| CoIR (10) | MLDR (En) | MTRAG (4) | Encoding Speed (docs/sec) |
|------------------------------------|:--------------:|:--------------:|:-------------------:|:-----------:|:---------:|:---------:|:---------:|:-------------------------------:|
| granite-embedding-125m-english | 125 | 768 | 52.3 | 62.1 | 50.3 | 35.0 | 49.4 | 149 |
| granite-embedding-30m-english | 30 | 384 | 49.1 | 60.2 | 47.0 | 32.6 | 48.6 | 198 |
| granite-embedding-english-r2 | 149 | 768 | 53.1 | 62.8 | 55.3 | 40.7 | 56.7 | 144 |
| granite-embedding-small-english-r2 | 47 | 384 | 50.9 | 61.1 | 53.8 | 39.8 | 48.1 | 199 |
|Model | Parameters (M)| Embedding Size|**AVERAGE**|MTEB-v2 Retrieval (10)| CoIR (10)| MLDR (En)| LongEmbed (6)| Table IR (5)| MTRAG (4) | Encoding Speed (docs/sec)|
|-----------------------------------|:-------------:|:-------------:|:---------:|:--------------------:|:--------:|:--------:|:------------:|:-----------:|:--------:|-----------:|
|e5-small-v2 |33|384|45.39|48.5|47.1|29.9|40.7|72.31|33.8| 138|
|bge-small-en-v1.5 |33|384|45.22|53.9|45.8|31.4|32.1|69.91|38.2| 138|
|||||||||||
|granite-embedding-english-r2 |149|768|59.5|56.4|54.8|41.6|67.8|78.53|57.6| 144|
|granite-embedding-small-english-r2 | 47|384|55.6|53.9|53.4|40.1|61.9|75.51|48.9| 199|
### Model Architecture and Key Features
The latest granite embedding r2 release introduces two English embedding models, both based on the ModernBERT architecture:
- _granite-embedding-english-r2_ (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
- _granite-embedding-small-english-r2_ (**47M** parameters): A _first-of-its-kind_ reduced-size model, with fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.
The following table shows the structure of the two models:
| Model | **granite-embedding-small-english-r2** | granite-embedding-english-r2 |
| :--------- | :-------:|:--------:|
| Embedding size | **384** | 768 |
| Number of layers | **12** | 22 |
| Number of attention heads | **12** | 12 |
| Intermediate size | **1536** | 1152 |
| Activation Function | **GeGLU** | GeGLU |
| Vocabulary Size | **50368** | 50368 |
| Max. Sequence Length | **8192** | 8192 |
| # Parameters | **47M** | 149M |
### Training and Optimization
The granite embedding r2 models incorporate key enhancements from the ModernBERT architecture, including:
- Alternating attention lengths to accelerate processing
- Rotary position embeddings for extended sequence length
- A newly trained tokenizer optimized with code and text data
- Flash Attention 2.0 for improved efficiency
- Streamlined parameters, eliminating unnecessary bias terms
## Data Collection
Granite embedding r2 models are trained using data from four key sources:
1. Unsupervised title-body paired data scraped from the web
2. Publicly available paired data with permissive, enterprise-friendly licenses
3. IBM-internal paired data targeting specific technical domains
4. IBM-generated synthetic data
Notably, we _do not use_ the popular MS-MARCO retrieval dataset in our training corpus due to its non-commercial license (many open-source models use this dataset due to its high quality).
The underlying encoder models are trained using GneissWeb, an IBM-curated dataset composed exclusively of open, commercial-friendly sources.
For governance, all our data undergoes a data clearance process subject to technical, business, and governance review. This comprehensive process captures critical information about the data, including but not limited to its content description, ownership, intended use, data classification, licensing information, usage restrictions, how the data will be acquired, as well as an assessment of sensitive information (i.e., personal information).
## Infrastructure
We trained the granite embedding english r2 models using IBM's computing cluster, BlueVela Cluster, which is outfitted with NVIDIA H100 80GB GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.
## Ethical Considerations and Limitations
Granite-embedding-small-english-r2 leverages both permissively licensed open-source and select proprietary data for enhanced performance. The training data for the base language model was filtered to remove text containing hate, abuse, and profanity. Granite-embedding-small-english-r2 is trained only for English texts, and has a context length of 8192 tokens (longer texts will be truncated to this size).
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
## Citation
```
@misc{awasthy2025graniteembeddingr2models,
title={Granite Embedding R2 Models},
author={Parul Awasthy and Aashka Trivedi and Yulong Li and Meet Doshi and Riyaz Bhat and Vignesh P and Vishwajeet Kumar and Yushu Yang and Bhavani Iyer and Abraham Daniels and Rudra Murthy and Ken Barker and Martin Franz and Madison Lee and Todd Ward and Salim Roukos and David Cox and Luis Lastras and Jaydeep Sen and Radu Florian},
year={2025},
eprint={2508.21085},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.21085},
}
```
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756792946
|
omerbektass
| 2025-09-02T06:02:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:02:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756792708
|
akirafudo
| 2025-09-02T05:58:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:58:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dhanyabahadur/interior-design-clip-vision-ip-adapter
|
dhanyabahadur
| 2025-09-02T05:53:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-21T13:35:22Z |
---
license: apache-2.0
---
|
billelkhr/deberta-v3-sentiment-review-movie
|
billelkhr
| 2025-09-02T05:51:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"dataset:ajaykarthick/imdb-movie-reviews",
"arxiv:1910.09700",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-02T05:22:03Z |
---
library_name: transformers
datasets:
- ajaykarthick/imdb-movie-reviews
metrics:
- accuracy
- f1
base_model:
- microsoft/deberta-v3-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [billel khiri]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
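As a starting point, a hedged sketch using the `transformers` pipeline API; the repository id is taken from this row, and the label names are an assumption, since the card does not document them:

```python
# A minimal sketch, assuming the checkpoint loads as a standard
# text-classification model; label names are not documented in this card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="billelkhr/deberta-v3-sentiment-review-movie",
)

review = "A beautifully shot film with a script that never quite lands."
print(classifier(review))  # e.g. [{'label': ..., 'score': ...}]
```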
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
95% accuracy
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
david3621/blockassist-bc-gentle_meek_cat_1756790886
|
david3621
| 2025-09-02T05:49:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle meek cat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:43:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle meek cat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756789998
|
acidjp
| 2025-09-02T05:46:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:46:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amandacute/blockassist-bc-amphibious_plump_ram_1756791804
|
amandacute
| 2025-09-02T05:45:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious plump ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:44:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sesamehowie/Qwen3-0.6B-Gensyn-Swarm-grunting_twitchy_tarantula
|
sesamehowie
| 2025-09-02T05:43:58Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am grunting_twitchy_tarantula",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T14:10:36Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am grunting_twitchy_tarantula
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
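As a starting point, a hedged sketch with the `transformers` text-generation pipeline; the repository id comes from this row, and the generation settings are illustrative only:

```python
# A minimal sketch, assuming the checkpoint works with the standard
# text-generation pipeline; sampling settings are illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="sesamehowie/Qwen3-0.6B-Gensyn-Swarm-grunting_twitchy_tarantula",
)

prompt = "Explain reinforcement learning in one sentence."
output = generator(prompt, max_new_tokens=64, do_sample=False)
print(output[0]["generated_text"])
```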
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756791644
|
akirafudo
| 2025-09-02T05:41:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:41:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756790077
|
vwzyrraz7l
| 2025-09-02T05:39:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:39:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756791505
|
omerbektass
| 2025-09-02T05:38:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:38:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|