modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 18:52:31) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (533 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 18:52:05) | card (string, 11 chars to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
akirafudo/blockassist-bc-keen_fast_giraffe_1756790469
|
akirafudo
| 2025-09-02T05:21:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:21:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756790330
|
aXsalll
| 2025-09-02T05:19:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:19:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756790089
|
klmdr22
| 2025-09-02T05:15:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:15:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756790046
|
xinnn32
| 2025-09-02T05:15:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:15:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1756788521
|
koloni
| 2025-09-02T05:14:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:14:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756789988
|
matherchodhuuu
| 2025-09-02T05:14:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:14:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ahmed-88889/llava-v1.6-mistral-7b-Scam_phrases_3epoch_coversation_clean_samples_v2
|
Ahmed-88889
| 2025-09-02T05:14:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T10:44:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
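Until the authors add their own instructions, here is a minimal, hypothetical sketch. It assumes the checkpoint is a full llava-v1.6-mistral-7b-style vision-language model loadable with the standard Transformers LLaVA-NeXT classes; the prompt format, input image, and question below are illustrative only.
```python
# Hypothetical usage sketch; assumes a llava-v1.6 (LLaVA-NeXT, Mistral-7B) style checkpoint.
import torch
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

model_id = "Ahmed-88889/llava-v1.6-mistral-7b-Scam_phrases_3epoch_coversation_clean_samples_v2"
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("screenshot.png")  # hypothetical input image
prompt = "[INST] <image>\nDoes this message contain scam phrases? [/INST]"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```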
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OpenMOSE/RWKV-Reka-Flash-Gen2
|
OpenMOSE
| 2025-09-02T05:14:16Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-08-30T09:47:32Z |
---
license: apache-2.0
---
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756789990
|
omerbektass
| 2025-09-02T05:13:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:13:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-pawing_downy_anaconda_1756789968
|
AnerYubo
| 2025-09-02T05:12:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pawing downy anaconda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:12:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing downy anaconda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amandacute/blockassist-bc-amphibious_plump_ram_1756789911
|
amandacute
| 2025-09-02T05:12:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious plump ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:12:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-screeching_mute_lemur_1756789959
|
AnerYubo
| 2025-09-02T05:12:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"screeching mute lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:12:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- screeching mute lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756789719
|
akirafudo
| 2025-09-02T05:09:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:08:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
choiqs/Qwen2.5-1.5B-ultrachat-qrm-p16-g8-ts300-steps50-oracle-studentization-lr2e-6-warmup0.1-seed41
|
choiqs
| 2025-09-02T05:06:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T05:06:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
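Until the authors fill this in, here is a minimal, hypothetical sketch. It assumes the checkpoint is a standard Qwen2-style causal LM with a chat template, as the repository tags (qwen2, text-generation, conversational) suggest; the example question is illustrative only.
```python
# Hypothetical sketch; assumes a standard Qwen2-style chat model usable via the text-generation pipeline.
from transformers import pipeline

model_id = "choiqs/Qwen2.5-1.5B-ultrachat-qrm-p16-g8-ts300-steps50-oracle-studentization-lr2e-6-warmup0.1-seed41"
generator = pipeline("text-generation", model=model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what a reward model does in one sentence."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```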
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756789424
|
aXsalll
| 2025-09-02T05:04:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:04:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756789337
|
akirafudo
| 2025-09-02T05:02:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:02:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q4-mlx
|
nightmedia
| 2025-09-02T05:01:16Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"programming",
"code generation",
"code",
"codeqwen",
"moe",
"coding",
"coder",
"qwen2",
"chat",
"qwen",
"qwen-coder",
"finetune",
"brainstorm 20x",
"brainstorm",
"optional thinking",
"creative",
"all use cases",
"QiMing",
"QiMing-holos",
"bagua",
"decision-making",
"strategic-analysis",
"cognitive-architecture",
"philosophy-driven-ai",
"text-generation",
"conversational",
"en",
"fr",
"zh",
"de",
"base_model:DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL",
"base_model:quantized:DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-09-02T04:22:14Z |
---
license: apache-2.0
library_name: mlx
language:
- en
- fr
- zh
- de
tags:
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- qwen3
- finetune
- brainstorm 20x
- brainstorm
- optional thinking
- creative
- all use cases
- QiMing
- QiMing-holos
- bagua
- decision-making
- strategic-analysis
- cognitive-architecture
- philosophy-driven-ai
- mlx
base_model: DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL
pipeline_tag: text-generation
---
# Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q4-mlx
This model [Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q4-mlx](https://huggingface.co/nightmedia/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q4-mlx) was
converted to MLX format from [DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL](https://huggingface.co/DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL)
using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q4-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
AnerYubo/blockassist-bc-hairy_crested_fox_1756789247
|
AnerYubo
| 2025-09-02T05:00:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy crested fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:00:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy crested fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756789219
|
omerbektass
| 2025-09-02T05:00:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:00:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
choiqs/Qwen2.5-1.5B-ultrachat-qrm-p16-g8-ts300-steps50-oracle-studentization-lr2e-6-warmup0.1-seed40
|
choiqs
| 2025-09-02T05:00:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T05:00:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
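Until the authors fill this in, here is a minimal, hypothetical sketch. It assumes the checkpoint loads with `AutoTokenizer`/`AutoModelForCausalLM` and ships a chat template, as the qwen2 and conversational tags suggest; the example prompt is illustrative only.
```python
# Hypothetical sketch; assumes a standard Qwen2-style chat checkpoint with a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "choiqs/Qwen2.5-1.5B-ultrachat-qrm-p16-g8-ts300-steps50-oracle-studentization-lr2e-6-warmup0.1-seed40"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give one tip for writing clear model cards."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```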
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756789173
|
aXsalll
| 2025-09-02T05:00:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:59:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756789048
|
2hpsatt
| 2025-09-02T04:58:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:58:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tiny-random/minicpm-v-4_5
|
tiny-random
| 2025-09-02T04:58:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"minicpmv",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"base_model:openbmb/MiniCPM-V-4_5",
"base_model:finetune:openbmb/MiniCPM-V-4_5",
"region:us"
] |
image-text-to-text
| 2025-09-02T04:58:35Z |
---
library_name: transformers
pipeline_tag: image-text-to-text
inference: true
widget:
- text: Hello!
  example_title: Hello world
  group: Python
base_model:
- openbmb/MiniCPM-V-4_5
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [openbmb/MiniCPM-V-4_5](https://huggingface.co/openbmb/MiniCPM-V-4_5).
### Example usage:
```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model_id = "tiny-random/minicpm-v-4_5"
model = AutoModel.from_pretrained(model_id, trust_remote_code=True,
                                  attn_implementation='sdpa', torch_dtype=torch.bfloat16)
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
image = Image.fromarray(np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8), 'RGB')
question = "What is the landform in the picture?"
msgs = [{'role': 'user', 'content': [image, question]}]
answer = model.chat(
    msgs=msgs,
    image=image,
    tokenizer=tokenizer,
    max_new_tokens=32,
)
print(answer)
# Second round chat, pass history context of multi-turn conversation
msgs.append({"role": "assistant", "content": [answer]})
msgs.append({"role": "user", "content": [
    "What should I pay attention to when traveling here?"]})
answer = model.chat(
    msgs=msgs,
    image=None,
    tokenizer=tokenizer,
    max_new_tokens=32,
)
print(answer)
```
### Codes to create this repo:
```python
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import hf_hub_download
from transformers import (
    AutoConfig,
    AutoModel,
    AutoModelForCausalLM,
    AutoProcessor,
    AutoTokenizer,
    GenerationConfig,
    set_seed,
)
source_model_id = "openbmb/MiniCPM-V-4_5"
save_folder = "/tmp/tiny-random/minicpm-v-4_5"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model',), 'r', encoding='utf-8') as f:
    config_json = json.load(f)
for k, v in config_json['auto_map'].items():
    config_json['auto_map'][k] = f'{source_model_id}--{v}'
automap = config_json['auto_map']
config_json['head_dim'] = 32
config_json["hidden_size"] = 128 # required by Sampler -- num_heads=embed_dim // 128
config_json['intermediate_size'] = 128
config_json['num_attention_heads'] = 2
config_json['num_key_value_heads'] = 1
config_json['num_hidden_layers'] = 2
config_json['tie_word_embeddings'] = True
# factor = config_json['rope_scaling']['long_factor']
# config_json['rope_scaling']['long_factor'] = factor[:16]
# config_json['rope_scaling']['short_factor'] = factor[:16]
config_json['vision_config']['intermediate_size'] = 128
config_json['vision_config']['hidden_size'] = 64
config_json['vision_config']['num_attention_heads'] = 2
config_json['vision_config']['num_hidden_layers'] = 2
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
    json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
    save_folder,
    trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModel.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
model.generation_config = GenerationConfig.from_pretrained(
    source_model_id, trust_remote_code=True,
)
set_seed(42)
num_params = sum(p.numel() for p in model.parameters())
with torch.no_grad():
    for name, p in sorted(model.named_parameters()):
        torch.nn.init.normal_(p, 0, 0.1)
        print(name, p.shape, p.dtype, p.device, f'{p.numel() / num_params * 100: .2f}%')
    pass
model.save_pretrained(save_folder)
def modify_automap(path, source_model_id):
    import json
    with open(path, 'r', encoding='utf-8') as f:
        content = json.load(f)
    automap = {}
    if content.get('auto_map', None) is not None:
        for key, value in content.get('auto_map').items():
            if isinstance(value, str):
                value = source_model_id + '--' + value.split('--')[-1]
            else:
                value = [(source_model_id + '--' + v.split('--')[-1]) for v in value]
            automap[key] = value
    with open(path, 'w', encoding='utf-8') as f:
        json.dump({**content, 'auto_map': automap}, f, indent=2)
modify_automap(f"{save_folder}/config.json", source_model_id)
modify_automap(f'{save_folder}/processor_config.json', source_model_id)
modify_automap(f'{save_folder}/preprocessor_config.json', source_model_id)
modify_automap(f'{save_folder}/tokenizer_config.json', source_model_id)
for f in Path(save_folder).glob('*.py'):
    f.unlink()
```
|
klmdr22/blockassist-bc-wild_loud_newt_1756789068
|
klmdr22
| 2025-09-02T04:58:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:58:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amandacute/blockassist-bc-amphibious_plump_ram_1756789051
|
amandacute
| 2025-09-02T04:58:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious plump ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:58:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756787380
|
coelacanthxyz
| 2025-09-02T04:57:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:57:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756788914
|
aXsalll
| 2025-09-02T04:55:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:55:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756788880
|
omerbektass
| 2025-09-02T04:55:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:54:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-v2_4624
|
luckeciano
| 2025-09-02T04:53:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T00:23:14Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-v2_4624
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-v2_4624
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-v2_4624", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/q3zdooe9)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-FisherMaskSentence-1e-4-v2_8146
|
luckeciano
| 2025-09-02T04:53:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T00:35:21Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-FisherMaskSentence-1e-4-v2_8146
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-FisherMaskSentence-1e-4-v2_8146
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-FisherMaskSentence-1e-4-v2_8146", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/wq85gqp4)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
stonesstones/img_tok_nusc_s_res84
|
stonesstones
| 2025-09-02T04:51:43Z | 310 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ourea_tokenizer",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2025-08-31T09:12:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
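Until the authors document usage, here is a minimal, hypothetical loading sketch. The repository tags indicate a custom `ourea_tokenizer` architecture with remote code and a feature-extraction pipeline tag, so the example only loads the model; the expected input format is not documented here.
```python
# Hypothetical loading sketch; the custom architecture requires trust_remote_code=True.
from transformers import AutoModel

model_id = "stonesstones/img_tok_nusc_s_res84"
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
model.eval()
print(model.config)  # inspect the custom configuration before running inference
```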
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756788604
|
aXsalll
| 2025-09-02T04:50:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:50:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ayousanz/piper-plus-base
|
ayousanz
| 2025-09-02T04:49:23Z | 10 | 2 | null |
[
"ja",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-08-28T06:07:28Z |
---
license: cc-by-sa-4.0
language:
- ja
---
# Japanese pretrained model - [piper-plus](https://github.com/ayutaz/piper-plus)
This is a Japanese-focused pretrained model trained from scratch on roughly 100 hours of Japanese data. The training has not converged well yet, so a more accurate version is planned for release in the future.
|
hinoarashi/test4_act-policy-v3
|
hinoarashi
| 2025-09-02T04:48:35Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:hinoarashi/test4",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T04:48:21Z |
---
datasets: hinoarashi/test4
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
lanimatingl/fake-news-sentiment-model
|
lanimatingl
| 2025-09-02T04:44:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-02T02:45:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
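Until the authors fill this in, here is a minimal, hypothetical sketch. It assumes the checkpoint is a standard BERT sequence-classification model, as the `bert` and `text-classification` tags suggest; the label names and the example headline are illustrative only.
```python
# Hypothetical sketch; assumes a standard BERT text-classification head.
from transformers import pipeline

classifier = pipeline("text-classification", model="lanimatingl/fake-news-sentiment-model")
print(classifier("Scientists announce a miracle cure that doctors don't want you to know about."))
```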
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756788003
|
akirafudo
| 2025-09-02T04:40:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:40:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756787803
|
matherchodhuuu
| 2025-09-02T04:38:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:37:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amandacute/blockassist-bc-amphibious_plump_ram_1756787805
|
amandacute
| 2025-09-02T04:37:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious plump ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:37:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1756787770
|
omerbkts
| 2025-09-02T04:36:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:36:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tencent/Hunyuan-1.8B-Instruct-GPTQ-Int4
|
tencent
| 2025-09-02T04:36:23Z | 80 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-1.8B-Instruct",
"base_model:quantized:tencent/Hunyuan-1.8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2025-07-30T11:46:23Z |
---
base_model:
- tencent/Hunyuan-1.8B-Instruct
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/organization/Tencent-Hunyuan"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-1.8B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain**, **Hunyuan-0.5B-Instruct**, **Hunyuan-1.8B-Pretrain**, **Hunyuan-1.8B-Instruct**, **Hunyuan-4B-Pretrain**, **Hunyuan-4B-Instruct**, **Hunyuan-7B-Pretrain**, and **Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks were evaluated with the TRT-LLM backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install transformers.
```SHELL
pip install "transformers>=4.56.0"
```
Our model defaults to slow-thinking reasoning. There are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model to skip CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and run the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True,return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
We recommend using the following set of parameters for inference. Note that our model does not define a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
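As a minimal illustration (our sketch, not part of the original card), these sampling parameters can be passed directly to `model.generate`, reusing the `model`, `tokenizer`, and `tokenized_chat` objects from the snippet above:
```python
# Sketch: applying the recommended sampling parameters with transformers.
# Assumes `model`, `tokenizer`, and `tokenized_chat` from the example above.
outputs = model.generate(
    tokenized_chat.to(model.device),
    max_new_tokens=2048,
    do_sample=True,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0]))
```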
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
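To sanity-check the rendered training text (our addition, not from the card), decode the token IDs back to a string and confirm the `<think>`/`<answer>` structure survives the chat template:
```python
# Sketch: verify that apply_chat_template preserved the think/answer pattern.
rendered = tokenizer.decode(train_ids)
assert "<think>" in rendered and "<answer>" in rendered
print(rendered)
```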
### Train with LLaMA-Factory
In the following chapter, we will introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformer Library**: Use the companion branch (Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
*Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
*Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngleSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantization models. `AngleSlim` is a toolset dedicated to creating a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization. FP8 quantization adopts an 8-bit floating-point format and uses a small amount of calibration data (no training required) to pre-determine the quantization scale; model weights and activation values are converted to FP8 format, improving inference efficiency and lowering the deployment threshold. You can quantize with AngelSlim, or directly download our pre-quantized open-source models [LINK](https://huggingface.co/).
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights; the weights are adjusted layer by layer via an optimization process that approximates the inverse Hessian matrix. The process eliminates the need to retrain the model and requires only a small amount of calibration data to quantize the weights, improving inference efficiency and lowering the deployment threshold.
AWQ uses a small amount of calibration data (no training required) to compute statistics of the activation magnitudes. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize with [AngelSlim](https://github.com/tencent/AngelSlim), or directly download our pre-quantized open-source models [LINK](https://huggingface.co/).
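For intuition only, here is a toy sketch (ours, not AngelSlim code) of the per-channel scaling idea behind W4A16 weight-only quantization; real GPTQ/AWQ pipelines add calibration, error compensation, and packed INT4 kernels:
```python
import torch

# Toy W4A16 fake-quantization with an AWQ-style per-channel scale `s`.
def fake_quant_w4(weight: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    w = weight * s                                            # protect important channels
    step = (w.abs().amax(dim=1, keepdim=True) / 7).clamp(min=1e-8)  # 4-bit signed range [-8, 7]
    q = torch.clamp(torch.round(w / step), -8, 7)             # quantize to INT4 levels
    return (q * step) / s                                     # dequantize, undo the scale

w = torch.randn(4, 8)
s = torch.ones(4, 1)                                          # AWQ would choose s per channel
print("mean abs error:", (w - fake_quant_w4(w, s)).abs().mean().item())
```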
#### Quantization Benchmark
This subsection reports benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker images: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started, pull the image (available tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags):
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vllm
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download the model file:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- If the model was downloaded from Hugging Face:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- If the model was downloaded from ModelScope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service starts successfully, run the request script:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
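Equivalently (our example, assuming the `openai` Python package is installed), the same endpoint can be queried with an OpenAI-compatible client:
```python
# Sketch: querying the vLLM OpenAI-compatible server started above.
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="hunyuan",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"},
    ],
    max_tokens=2048,
    temperature=0.7,
    top_p=0.6,
    extra_body={"top_k": 20, "repetition_penalty": 1.05, "stop_token_ids": [127960]},
)
print(resp.choices[0].message.content)
```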
#### Quantized model deployment
This section describes the process of deploying a post-quantization model using vLLM.
By default the server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable as in the startup example above.
Next we start the Int8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4-weight-only (GPTQ) version of the HunYuan-7B model only requires setting the environment variable:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next we start the Int4 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 (FP8) version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable to the corresponding model path.
Next we start the FP8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also contact us via email (hunyuan_opensource@tencent.com).
|
EmilRyd/qwen2.5-14b-calibrated
|
EmilRyd
| 2025-09-02T04:36:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"lora",
"transformers",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T22:00:08Z |
---
base_model: Qwen/Qwen2.5-14B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- axolotl
- base_model:adapter:Qwen/Qwen2.5-14B-Instruct
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
depth-anything/camera-depth-model-d405
|
depth-anything
| 2025-09-02T04:35:21Z | 0 | 0 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-09-01T04:31:42Z |
---
license: cc-by-nc-4.0
---
This repository contains the camera depth model of the paper Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots.
Model inference guide: https://github.com/ByteDance-Seed/manip-as-in-sim-suite/tree/main/cdm
Project page: https://manipulation-as-in-simulation.github.io
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756787615
|
2hpsatt
| 2025-09-02T04:34:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:34:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nadsoft/emirate_model_version4.13
|
nadsoft
| 2025-09-02T04:28:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T09:50:03Z |
---
base_model: unsloth/Qwen3-8B
library_name: transformers
model_name: emirate_model_version4.13
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for emirate_model_version4.13
This model is a fine-tuned version of [unsloth/Qwen3-8B](https://huggingface.co/unsloth/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nadsoft/emirate_model_version4.13", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nad-soft/huggingface/runs/0n5w1zqy)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
indrarg/blockassist-bc-pensive_zealous_hyena_1756787205
|
indrarg
| 2025-09-02T04:27:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive zealous hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:27:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive zealous hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756787234
|
akirafudo
| 2025-09-02T04:27:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:27:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1756787175
|
fakir22
| 2025-09-02T04:26:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping peaceful caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:26:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping peaceful caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tencent/Hunyuan-1.8B-Instruct-AWQ-Int4
|
tencent
| 2025-09-02T04:26:44Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-1.8B-Instruct",
"base_model:quantized:tencent/Hunyuan-1.8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2025-07-31T07:40:50Z |
---
base_model:
- tencent/Hunyuan-1.8B-Instruct
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/organization/Tencent-Hunyuan"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-1.8B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain**, **Hunyuan-0.5B-Instruct**, **Hunyuan-1.8B-Pretrain**, **Hunyuan-1.8B-Instruct**, **Hunyuan-4B-Pretrain**, **Hunyuan-4B-Instruct**, **Hunyuan-7B-Pretrain**, and **Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks were evaluated with the TRT-LLM backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install transformers.
```SHELL
pip install "transformers>=4.56.0"
```
Our model defaults to slow-thinking reasoning. There are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model to skip CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and run the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True,return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
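As a hedged variant of the snippet above (our addition), thinking can be switched off either by `enable_thinking=False` or by prefixing the prompt with "/no_think":
```python
# Sketch: the same call with slow thinking disabled.
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=False,   # or prepend "/no_think" to the user prompt
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
```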
We recommend using the following set of parameters for inference. Note that our model does not define a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
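These same values map onto vLLM's `SamplingParams` for offline inference; a minimal sketch (ours, assuming the `vllm` package is installed) is shown below:
```python
# Sketch: offline inference with vLLM using the recommended parameters.
from vllm import LLM, SamplingParams

llm = LLM(model="tencent/Hunyuan-1.8B-Instruct-AWQ-Int4", trust_remote_code=True)
params = SamplingParams(temperature=0.7, top_p=0.8, top_k=20,
                        repetition_penalty=1.05, max_tokens=2048)
outputs = llm.chat([{"role": "user", "content": "介绍一下AWQ量化。"}], params)
print(outputs[0].outputs[0].text)
```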
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In the following chapter, we will introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformer Library**: Use the companion branch (Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
*Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
*Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngleSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantization models. `AngleSlim` is a toolset dedicated to creating a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization. FP8 quantization adopts an 8-bit floating-point format and uses a small amount of calibration data (no training required) to pre-determine the quantization scale; model weights and activation values are converted to FP8 format, improving inference efficiency and lowering the deployment threshold. You can quantize with AngelSlim, or directly download our pre-quantized open-source models [LINK](https://huggingface.co/).
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights; the weights are adjusted layer by layer via an optimization process that approximates the inverse Hessian matrix. The process eliminates the need to retrain the model and requires only a small amount of calibration data to quantize the weights, improving inference efficiency and lowering the deployment threshold.
AWQ uses a small amount of calibration data (no training required) to compute statistics of the activation magnitudes. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize with [AngelSlim](https://github.com/tencent/AngelSlim), or directly download our pre-quantized open-source models [LINK](https://huggingface.co/).
#### Quantization Benchmark
This subsection reports benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker images: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started, pull the image (available tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags):
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vllm
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download the model file:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- If the model was downloaded from Hugging Face:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- If the model was downloaded from ModelScope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service starts successfully, run the request script:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
#### Quantized model deployment
This section describes the process of deploying a post-quantization model using vLLM.
By default the server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable as in the startup example above.
Next we start the Int8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4-weight-only (GPTQ) version of the HunYuan-7B model only requires setting the environment variable:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next we start the Int4 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 (FP8) version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable to the corresponding model path.
Next we start the FP8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
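Once the container is up, the server exposes an OpenAI-compatible endpoint on the mapped port; a quick smoke test (our example, adjust the model name to your served path) could look like:
```shell
curl http://0.0.0.0:30000/v1/chat/completions -H 'Content-Type: application/json' -d '{
  "model": "hunyuan/huanyuan_7B",
  "messages": [{"role": "user", "content": "Hello"}],
  "max_tokens": 128,
  "temperature": 0.7
}'
```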
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also contact us via email (hunyuan_opensource@tencent.com).
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756787060
|
aXsalll
| 2025-09-02T04:25:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:24:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tiopuiter/blockassist-bc-armored_thriving_cod_1756787055
|
tiopuiter
| 2025-09-02T04:24:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored thriving cod",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:24:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored thriving cod
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pattarapon-boonpa/thai-gpt2-finetuned
|
pattarapon-boonpa
| 2025-09-02T04:24:26Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"thai",
"qa",
"fine-tuned",
"th",
"dataset:disease_3000",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2025-09-02T04:23:29Z |
---
datasets:
- disease_3000
language: th
license: mit
metrics:
- perplexity
model_name: Thai GPT-2 Fine-Tuned
tags:
- thai
- gpt2
- qa
- fine-tuned
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A GPT-2 model fine-tuned for Thai question answering, trained on a dataset of 3,000 disease-related question-answer pairs.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** th
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xxzws/hipporag-triple
|
xxzws
| 2025-09-02T04:24:19Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"region:us"
] | null | 2025-09-01T10:48:55Z |
# HippRAG Model
[](https://opensource.org/licenses/Apache-2.0)
## Introduction / 介绍
**English:**
This model is designed for extracting Chinese HippoRAG triples (subject-predicate-object). It is trained exclusively on Chinese corpora but can be extended to other languages via the provided training code. Based on Qwen3-1.7B, it supports extension to other models in the Qwen series; compatibility with other model families remains untested. The model also handles input formats such as Markdown (MD) and LaTeX.
**中文:**
该模型专为HippoRAG提取中文三元组(主体-谓词-客体)而设计,仅使用中文语料进行训练,但可通过提供的训练代码扩展至其他语言。基于Qwen3-1.7B,可扩展至Qwen系列的其他模型;与其他模型族的兼容性尚未验证。可支持Markdown(MD)、LaTeX等数据格式的识别。
## Usage / 使用
**English:**
The invocation method aligns with Qwen3 (refer to [Qwen3 Documentation](https://huggingface.co/Qwen)). Due to partial incompatibility of Transformers with certain inference environments, generation may continue indefinitely; it is advisable to incorporate a stop token like `"}]}` as a safeguard.
**中文:**
调用方式与Qwen3相同(参见[Qwen3文档](https://huggingface.co/Qwen))。由于Transformers与部分推理环境的不完全兼容,可能导致生成无休止,建议添加停止符`"}]}`作为双重保障。
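A minimal usage sketch (ours, not from the repository), assuming a recent transformers release that supports `stop_strings` in `generate`; the prompt wording below is only illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xxzws/hipporag-triple"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

text = "爱因斯坦于1921年获得诺贝尔物理学奖。"
messages = [{"role": "user", "content": f"请从以下文本中抽取三元组,输出标准JSON数组:\n{text}"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# stop_strings acts as the '"}]}' safeguard mentioned above.
outputs = model.generate(inputs, max_new_tokens=512, stop_strings=['"}]}'], tokenizer=tokenizer)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```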
## Training / 训练
**English:**
Given the lack of provided data, the model's performance is moderate. You can enhance it through further training using the code below (involving two datasets: one simple and one challenging).
**中文:**
由于缺乏外部数据支持,模型效果中等。可使用以下代码进行增量训练(涉及两个数据集:一个简单,一个较复杂)。
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Windows 版:自定义复合损失 + 5轮课程学习(简单→复杂)
- 兼容 Windows 路径(Pathlib)
- 避免 Windows DataLoader 多进程问题(num_workers=0)
- 自动补 pad_token_id
- device_map="auto"(有 CUDA 则走 GPU)
"""
import os
import json
import math
import torch
import torch.nn.functional as F
from datetime import datetime
from tqdm import tqdm
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import load_dataset, concatenate_datasets
from accelerate import Accelerator
from pathlib import Path
# ========== 环境建议 ==========
# Windows 上建议显式关闭多核分词的线程提示
os.environ.setdefault("TOKENIZERS_PARALLELISM", "false")
# ========== 全局配置(按需修改) ==========
# 模型路径(Windows 示例)
MODEL_PATH = r"H:\model\qwen3"
# 训练数据(把这两个改成你的本机路径)
TRAIN_FILE = r"H:\data\train_fixed.json"
PARA_FILE = r"H:\data\paragraph_train.json"
# 输出目录(时间戳)
OUTPUT_ROOT = Path(r"H:\model") / f"qwen3_custom_ft_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
OUTPUT_ROOT.mkdir(parents=True, exist_ok=True)
print("🚀 当前可见 GPU 数量:", torch.cuda.device_count())
# ========== 主流程 ==========
def step4_with_curriculum():
print("=== Step4: 自定义复合损失 + 5轮课程学习(Windows 版) ===")
out_dir = OUTPUT_ROOT / "step4_custom_curriculum"
out_dir.mkdir(parents=True, exist_ok=True)
# 加载分词器与模型
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
if tokenizer.pad_token_id is None:
# 没有 pad_token 就用 eos 兜底
tokenizer.pad_token = tokenizer.eos_token if tokenizer.eos_token is not None else "</s>"
model = AutoModelForCausalLM.from_pretrained(
MODEL_PATH,
trust_remote_code=True,
device_map="auto" # 有 CUDA 会自动放 GPU
)
model.train()
for p in model.parameters():
if torch.is_floating_point(p):
p.requires_grad = True
# ========== 数据准备:simple_raw / complex_raw(再切两半)==========
# Windows 路径用字符串即可,datasets 内部兼容
orig_ds = load_dataset("json", data_files={"orig": str(TRAIN_FILE)})["orig"]
para_ds = load_dataset("json", data_files={"para": str(PARA_FILE)})["para"].shuffle(seed=42)
half = (len(para_ds) - len(orig_ds)) // 2 if (len(para_ds) > len(orig_ds)) else len(para_ds) // 2
simple_raw = concatenate_datasets([orig_ds, para_ds.select(range(half))])
complex_raw = para_ds.select(range(half, len(para_ds)))
# 将 complex 再切两半:c1 / c2
c_half = len(complex_raw) // 2
complex1_raw = complex_raw.select(range(c_half))
complex2_raw = complex_raw.select(range(c_half, len(complex_raw)))
complex_all_raw = concatenate_datasets([complex1_raw, complex2_raw])
all_raw = concatenate_datasets([simple_raw, complex_all_raw])
# ========== Prompt 构造 & 预处理 ==========
INSTR = (
"请从以下文本中抽取三元组,输出格式为标准JSON数组:\n"
"请务必严格输出JSON,不要附加说明文字。\n"
"字段: subject=主体, predicate=关系, object=客体;请尽可能提取所有相关关系且不要混淆主体与客体。\n\n"
)
def build_prompt(text):
return f"<|user|>\n{INSTR}{text}\n<|assistant|>\n"
MAX_LEN = 1024
def preprocess(ex):
# 兼容 input/output 可能是非字符串的情况
src_inp = ex.get("input", "")
tgt_out = ex.get("output", "")
if not isinstance(src_inp, str):
src_inp = str(src_inp)
if not isinstance(tgt_out, str):
tgt_out = json.dumps(tgt_out, ensure_ascii=False)
prompt = build_prompt(src_inp)
full = prompt + tgt_out
tok = tokenizer(
full,
max_length=MAX_LEN,
truncation=True,
padding="max_length",
return_tensors="pt"
)
ids = tok.input_ids[0]
mask = tok.attention_mask[0]
labels = ids.clone()
# 计算 prompt 长度,屏蔽其 loss
plen = tokenizer(prompt, return_tensors="pt").input_ids.size(1)
labels[:plen] = -100
# predicate 掩码(朴素 token 匹配)
pmask = torch.zeros_like(ids, dtype=torch.bool)
try:
# 这里 ex["output"] 若不是 JSON 字符串,会在上面改成字符串
preds = [t["predicate"] for t in json.loads(tgt_out)]
tokens = tokenizer.convert_ids_to_tokens(ids)
for pred in preds:
toks = tokenizer.tokenize(pred)
L = len(toks)
if L == 0:
continue
for i in range(len(tokens) - L + 1):
if tokens[i:i+L] == toks:
pmask[i:i+L] = True
except Exception:
pass
return {
"input_ids": ids,
"attention_mask": mask,
"labels": labels,
"predicate_mask": pmask
}
accel = Accelerator()
# Windows 上 datasets 的 map 默认单进程即可(避免多进程 spawn 麻烦)
with accel.main_process_first():
simple = simple_raw.map(preprocess, remove_columns=simple_raw.column_names)
complex1 = complex1_raw.map(preprocess, remove_columns=complex1_raw.column_names)
complex2 = complex2_raw.map(preprocess, remove_columns=complex2_raw.column_names)
complex_all = complex_all_raw.map(preprocess, remove_columns=complex_all_raw.column_names)
all_ds = all_raw.map(preprocess, remove_columns=all_raw.column_names)
for ds in (simple, complex1, complex2, complex_all, all_ds):
ds.set_format(type="torch", columns=["input_ids", "attention_mask", "labels", "predicate_mask"])
# DataLoader:Windows 下稳妥用单进程
bs = 4
num_workers = 0 # ★ Windows:0 最稳,避免多进程卡死
dl_args = dict(batch_size=bs, shuffle=True, num_workers=num_workers, pin_memory=torch.cuda.is_available())
simple_loader = DataLoader(simple, **dl_args)
complex1_loader = DataLoader(complex1, **dl_args)
complex2_loader = DataLoader(complex2, **dl_args)
complex_all_loader = DataLoader(complex_all, **dl_args)
all_loader = DataLoader(all_ds, **dl_args)
optimizer = AdamW(model.parameters(), lr=5e-5)
(model, optimizer,
simple_loader, complex1_loader, complex2_loader, complex_all_loader, all_loader
) = accel.prepare(model, optimizer,
simple_loader, complex1_loader, complex2_loader, complex_all_loader, all_loader)
# ========== 训练参数 ==========
alpha, beta, delta = 1.0, 1.0, 0.2
grad_accum = 4
rounds = 8 # 固定 8 轮(你原注释写 5 轮,代码里是 8,我保持 8)
# ========== 训练子流程 ==========
def train_progressive_mix(loader_s, loader_c, round_idx):
"""第1轮:简单→复杂概率线性上升"""
total_steps = max(len(loader_s), len(loader_c))
it_s, it_c = iter(loader_s), iter(loader_c)
total_loss, step_count = 0.0, 0
for step in tqdm(range(total_steps), desc=f"Round {round_idx+1} (progressive mix)"):
p = (step + 1) / total_steps
pick_complex = torch.rand(1).item() < p
if pick_complex:
try:
batch = next(it_c)
except StopIteration:
it_c = iter(loader_c)
batch = next(it_c)
else:
try:
batch = next(it_s)
except StopIteration:
it_s = iter(loader_s)
batch = next(it_s)
loss = compute_loss(model, batch, tokenizer, alpha, beta-0.5, delta)
accel.backward(loss)
if (step + 1) % grad_accum == 0:
accel.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step(); optimizer.zero_grad()
total_loss += loss.item()
step_count += 1
return total_loss, step_count
def train_uniform_among_loaders(loaders, round_idx):
"""第2/3轮:按数据源均等采样(轮转)"""
k = len(loaders)
max_len = max(len(l) for l in loaders)
steps = max_len * k
iters = [iter(l) for l in loaders]
total_loss, step_count = 0.0, 0
for step in tqdm(range(steps), desc=f"Round {round_idx+1} (uniform across {k} sources)"):
idx = step % k
try:
batch = next(iters[idx])
except StopIteration:
iters[idx] = iter(loaders[idx])
batch = next(iters[idx])
loss = compute_loss(model, batch, tokenizer, alpha, beta-0.3, delta)
accel.backward(loss)
if (step + 1) % grad_accum == 0:
accel.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step(); optimizer.zero_grad()
total_loss += loss.item()
step_count += 1
return total_loss, step_count
def train_single_loader(loader, round_idx):
"""第4/5+轮:全量顺序训练"""
total_loss, step_count = 0.0, 0
for step, batch in enumerate(tqdm(loader, desc=f"Round {round_idx+1} (full data)")):
loss = compute_loss(model, batch, tokenizer, alpha, beta, delta)
accel.backward(loss)
if (step + 1) % grad_accum == 0:
accel.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step(); optimizer.zero_grad()
total_loss += loss.item()
step_count += 1
return total_loss, step_count
# ========== 五(八)轮课程学习 ==========
for r in range(rounds):
if r == 0:
tot, cnt = train_progressive_mix(simple_loader, complex_all_loader, r)
elif r == 1:
tot, cnt = train_uniform_among_loaders([simple_loader, complex1_loader], r)
elif r == 2:
tot, cnt = train_uniform_among_loaders([simple_loader, complex1_loader, complex2_loader], r)
else:
tot, cnt = train_single_loader(all_loader, r)
avg_loss = tot / max(1, cnt)
print(f"✅ Round {r+1} avg loss: {avg_loss:.4f}")
# 保存
if accel.is_main_process:
unwrapped = accel.unwrap_model(model)
unwrapped.save_pretrained(str(out_dir), safe_serialization=True)
tokenizer.save_pretrained(str(out_dir))
print("💾 保存至", out_dir)
# ========== 损失函数 ==========
def compute_loss(model, batch, tokenizer, alpha, beta, delta):
outputs = model(
input_ids=batch["input_ids"],
attention_mask=batch["attention_mask"],
labels=batch["labels"]
)
ce_loss = outputs.loss
# F1 on predicate tokens(embedding 近似 P/R)
pred_ids = outputs.logits.argmax(dim=-1)
mask_flat = batch["predicate_mask"].view(-1)
labels_flat = batch["labels"].view(-1)
pred_flat = pred_ids.view(-1)
valid_idx = mask_flat.nonzero(as_tuple=True)[0]
if valid_idx.numel() > 0:
true_ids = labels_flat[valid_idx]
pred_sel = pred_flat[valid_idx]
emb = model.get_input_embeddings()
vocab_sz = emb.num_embeddings
legal = (
(true_ids >= 0) & (true_ids < vocab_sz) &
(pred_sel >= 0) & (pred_sel < vocab_sz)
)
if legal.sum() > 0:
true_ids = true_ids[legal]
pred_sel = pred_sel[legal]
t_emb = emb(true_ids)
p_emb = emb(pred_sel)
S = F.cosine_similarity(t_emb.unsqueeze(1), p_emb.unsqueeze(0), dim=-1)
P_val = S.max(dim=1).values.mean()
R_val = S.max(dim=0).values.mean()
F1 = 2 * P_val * R_val / (P_val + R_val + 1e-8)
else:
F1 = torch.tensor(1.0, device=ce_loss.device)
else:
F1 = torch.tensor(1.0, device=ce_loss.device)
# 非结构输出惩罚
illegal = (batch["labels"] == -100) & (pred_ids != tokenizer.pad_token_id)
x = illegal.sum().float().clamp(min=0.0)
penalty = 1.0 - 1.0 / torch.log(x + 10.0)
return alpha * ce_loss + beta * (1 - F1) + delta * penalty
# ========== 入口 ==========
if __name__ == "__main__":
# Windows 下推荐:
# 1) Python 3.10/3.11 + torch/cu 版本匹配
# 2) 先把 TRAIN_FILE / PARA_FILE 改成你的真实路径
step4_with_curriculum()
print("🎉 完成")
```
## Training Logic / 训练逻辑
**English:**
The training adopts a curriculum learning strategy across 8 rounds, incorporating a composite loss function. Denote the simple dataset as $D_s$, the halves of the complex dataset as $D_{c1}$ and $D_{c2}$, with $D_c = D_{c1} \cup D_{c2}$, and $D_a = D_s \cup D_c$.
- **Round 1:** Progressive mixing: For each step $t = 1$ to $T = \max(|D_s|, |D_c|)$, sample from $D_c$ with probability $p_t = t / T$, otherwise from $D_s$. Loss: $L = \alpha \cdot L_{CE} + (\beta - 0.5) \cdot (1 - F1_p) + \delta \cdot P$, where $L_{CE}$ is cross-entropy loss, $F1_p$ approximates the F1-score on predicate tokens using cosine similarity of embeddings, and $P = 1 - 1 / \log(x + 10)$ penalizes non-structured outputs with $x$ being the count of illegal tokens.
- **Round 2:** Uniform sampling across $\{D_s, D_{c1}\}$: Cycle through loaders for $T = 2 \cdot \max(|D_s|, |D_{c1}|)$ steps, using $\beta - 0.3$.
- **Round 3:** Uniform across $\{D_s, D_{c1}, D_{c2}\}$: Similarly, $T = 3 \cdot \max$ over the three, using $\beta - 0.3$.
- **Rounds 4-8:** Full sequential training on $D_a$, employing the full $\beta$.
Optimization: AdamW with learning rate $5 \times 10^{-5}$, gradient accumulation every 4 steps, and clipping at 1.0. Parameters: $\alpha=1.0$, $\beta=1.0$, $\delta=0.2$.
**中文:**
训练采用8轮课程学习策略,结合复合损失函数。设简单数据集为$D_s$,复杂数据集的两半为$D_{c1}$和$D_{c2}$,$D_c = D_{c1} \cup D_{c2}$,$D_a = D_s \cup D_c$。
- **第1轮:** 渐进混合:对于每个步$t = 1$到$T = \max(|D_s|, |D_c|)$,以概率$p_t = t / T$从$D_c$采样,否则从$D_s$。损失:$L = \alpha \cdot L_{CE} + (\beta - 0.5) \cdot (1 - F1_p) + \delta \cdot P$,其中$L_{CE}$为交叉熵损失,$F1_p$通过嵌入余弦相似度近似谓词token的F1分数,$P = 1 - 1 / \log(x + 10)$惩罚非结构输出($x$为非法token数)。
- **第2轮:** 在$\{D_s, D_{c1}\}$上均匀采样:循环加载器$T = 2 \cdot \max(|D_s|, |D_{c1}|)$步,使用$\beta - 0.3$。
- **第3轮:** 在$\{D_s, D_{c1}, D_{c2}\}$上均匀采样:类似,$T = 3 \cdot \max$三者,使用$\beta - 0.3$。
- **第4-8轮:** 在$D_a$上全量顺序训练,使用完整$\beta$。
优化:AdamW,学习率$5 \times 10^{-5}$,每4步梯度累积,裁剪1.0。参数:$\alpha=1.0$,$\beta=1.0$,$\delta=0.2$。
|
leeminwaan/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-GGUF
|
leeminwaan
| 2025-09-02T04:23:46Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-02T04:14:31Z |
---
license: apache-2.0
base_model: gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts
pipeline_tag: text-generation
---
# Model Card for gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-GGUF
This repository contains multiple quantized versions of the gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts model in GGUF format.
It is intended for efficient inference on consumer hardware, making large model deployment more accessible.
## Model Details
### Model Description
- **Developed by:** leeminwaan
- **Funded by [optional]:** Independent project
- **Shared by [optional]:** leeminwaan
- **Model type:** Decoder-only transformer language model
- **Language(s) (NLP):** English (primary), multilingual capabilities not benchmarked
- **License:** Apache-2.0
### Model Sources
- **Repository:** [Hugging Face Repo](https://huggingface.co/leeminwaan/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-GGUF)
- **Paper [optional]:** Not available
- **Demo [optional]:** To be released
## How to Get Started with the Model
```python
from huggingface_hub import hf_hub_download
model_path = hf_hub_download("leeminwaan/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-GGUF", "gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-q4_k_m.gguf")
print("Downloaded:", model_path)
```
Quantized versions available:
* Q2\_K, Q3\_K\_S, Q3\_K\_M, Q3\_K\_L
* Q4\_0, Q4\_1, Q4\_K\_S, Q4\_K\_M
* Q5\_0, Q5\_1, Q5\_K\_S, Q5\_K\_M
* Q6\_K, Q8\_0
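To run a downloaded file locally, here is a minimal sketch using the llama-cpp-python bindings (assumptions: `llama-cpp-python` is installed and the Q4_K_M file shown above is used; the context size, prompt, and token limit are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    "leeminwaan/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-GGUF",
    "gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-q4_k_m.gguf",
)

# Load the quantized model; n_ctx controls the context window.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what GGUF quantization is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```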
## Training Details
### Training Data
* Based on gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts pretraining corpus (public large-scale web text, open datasets).
* No additional fine-tuning was performed for this release.
### Training Procedure
* Original gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts → quantized to GGUF formats.
### Quantization Results
| Quantization | Size (vs. FP16) | Speed | Quality | Recommended For |
|--------------|-----------------|-----------|------------|--------------------------------------|
| Q2_K | Smallest | Fastest | Low | Prototyping, minimal RAM/CPU |
| Q3_K_S | Very Small | Very Fast | Low-Med | Lightweight devices, testing |
| Q3_K_M | Small | Fast | Med | Lightweight, slightly better quality |
| Q3_K_L | Small-Med | Fast | Med | Faster inference, fair quality |
| Q4_0 | Medium | Fast | Good | General use, chats, low RAM |
| Q4_1 | Medium | Fast | Good+ | Recommended, slightly better quality |
| Q4_K_S | Medium | Fast | Good+ | Recommended, balanced |
| Q4_K_M | Medium | Fast | Good++ | Recommended, best Q4 option |
| Q5_0 | Larger | Moderate | Very Good | Chatbots, longer responses |
| Q5_1 | Larger | Moderate | Very Good+ | More demanding tasks |
| Q5_K_S | Larger | Moderate | Very Good+ | Advanced users, better accuracy |
| Q5_K_M | Larger | Moderate | Excellent | Demanding tasks, high quality |
| Q6_K | Large | Slower | Near FP16 | Power users, best quantized quality |
| Q8_0 | Largest | Slowest | FP16-like | Maximum quality, high RAM/CPU |
> **Note:**
> - Lower quantization = smaller model, faster inference, but lower output quality.
> - Q4_K_M is ideal for most users; Q6_K/Q8_0 offer the highest quality, best for advanced use.
> - All quantizations are suitable for consumer hardware—select based on your quality/speed needs.
## Technical Specifications
#### Software
* llama.cpp for quantization
* Python 3.10, huggingface\_hub
## Citation
**BibTeX:**
```bibtex
@misc{gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-GGUF,
  title={gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-GGUF Quantized Models},
author={leeminwaan},
year={2025},
howpublished={\url{https://huggingface.co/leeminwaan/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-GGUF}}
}
```
**APA:**
```
leeminwaan. (2025). gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-GGUF Quantized Models [Computer software]. Hugging Face. https://huggingface.co/leeminwaan/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-GGUF
```
## Glossary
* **Quantization:** Reducing precision of weights to lower memory usage.
* **GGUF:** Optimized format for llama.cpp inference.
## More Information
* This project is experimental.
* Expect further updates and quantization benchmarks.
## Model Card Authors
* leeminwaan
## Model Card Contact
* Hugging Face: [leeminwaan](https://huggingface.co/leeminwaan)
|
MowVNB/blockassist-bc-feline_grazing_macaw_1756785178
|
MowVNB
| 2025-09-02T04:23:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"feline grazing macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:22:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- feline grazing macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756786860
|
akirafudo
| 2025-09-02T04:21:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:21:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756786802
|
liukevin666
| 2025-09-02T04:21:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:20:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indrarg/blockassist-bc-pensive_zealous_hyena_1756786810
|
indrarg
| 2025-09-02T04:21:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive zealous hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:20:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive zealous hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thewisp/smolvla_move_cube_v2
|
thewisp
| 2025-09-02T04:20:40Z | 11 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:thewisp/move-cube-v2",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-31T22:55:39Z |
---
base_model: lerobot/smolvla_base
datasets: thewisp/move-cube-v2
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
openbmb/MiniCPM-V-4_5-AWQ
|
openbmb
| 2025-09-02T04:20:15Z | 531 | 4 |
transformers
|
[
"transformers",
"safetensors",
"minicpmv",
"feature-extraction",
"minicpm-v",
"vision",
"ocr",
"multi-image",
"video",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:openbmb/RLAIF-V-Dataset",
"arxiv:2403.11703",
"4-bit",
"awq",
"region:us"
] |
image-text-to-text
| 2025-08-26T07:03:13Z |
---
pipeline_tag: image-text-to-text
datasets:
- openbmb/RLAIF-V-Dataset
library_name: transformers
language:
- multilingual
tags:
- minicpm-v
- vision
- ocr
- multi-image
- video
- custom_code
---
<h1>A GPT-4o Level MLLM for Single Image, Multi Image and High-FPS Video Understanding on Your Phone</h1>
[GitHub](https://github.com/OpenBMB/MiniCPM-o) | [CookBook](https://github.com/OpenSQZ/MiniCPM-V-CookBook) | [Demo](http://101.126.42.235:30910/)
## MiniCPM-V 4.5
**MiniCPM-V 4.5** is the latest and most capable model in the MiniCPM-V series. The model is built on Qwen3-8B and SigLIP2-400M with a total of 8B parameters. It exhibits a significant performance improvement over previous MiniCPM-V and MiniCPM-o models, and introduces new useful features. Notable features of MiniCPM-V 4.5 include:
- 🔥 **State-of-the-art Vision-Language Capability.**
MiniCPM-V 4.5 achieves an average score of 77.0 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-latest, Gemini-2.0 Pro, and strong open-source models like Qwen2.5-VL 72B** for vision-language capabilities, making it the most performant MLLM under 30B parameters.
- 🎬 **Efficient High-FPS and Long Video Understanding.** Powered by a new unified 3D-Resampler over images and videos, MiniCPM-V 4.5 can now achieve a 96x compression rate for video tokens: six 448x448 video frames can be jointly compressed into 64 video tokens (normally 1,536 tokens for most MLLMs). This means the model can perceive significantly more video frames without increasing the LLM inference cost, bringing state-of-the-art high-FPS (up to 10 FPS) and long-video understanding capabilities on Video-MME, LVBench, MLVU, MotionBench, FavorBench, and more.
- ⚙️ **Controllable Hybrid Fast/Deep Thinking.** MiniCPM-V 4.5 supports both fast thinking for efficient frequent usage with competitive performance, and deep thinking for more complex problem solving. To cover efficiency and performance trade-offs in different user scenarios, this fast/deep thinking mode can be switched in a highly controlled fashion.
- 💪 **Strong OCR, Document Parsing and Others.**
Based on [LLaVA-UHD](https://arxiv.org/pdf/2403.11703) architecture, MiniCPM-V 4.5 can process high-resolution images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), using 4x less visual tokens than most MLLMs. The model achieves **leading performance on OCRBench, surpassing proprietary models such as GPT-4o-latest and Gemini 2.5**. It also achieves state-of-the-art performance for PDF document parsing capability on OmniDocBench among general MLLMs. Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, outperforming GPT-4o-latest on MMHal-Bench, and supports **multilingual capabilities** in more than 30 languages.
- 💫 **Easy Usage.**
MiniCPM-V 4.5 can be easily used in various ways: (1) [llama.cpp](https://github.com/tc-mb/llama.cpp/blob/Support-MiniCPM-V-4.5/docs/multimodal/minicpmv4.5.md) and [ollama](https://github.com/tc-mb/ollama/tree/MIniCPM-V) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-4_5-int4), [GGUF](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf) and [AWQ](https://github.com/tc-mb/AutoAWQ) format quantized models in 16 sizes, (3) [SGLang](https://github.com/tc-mb/sglang/tree/main) and [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with [Transformers](https://github.com/tc-mb/transformers/tree/main) and [LLaMA-Factory](./docs/llamafactory_train_and_infer.md), (5) quick [local WebUI demo](#chat-with-our-demo-on-gradio), (6) optimized [local iOS app](https://github.com/tc-mb/MiniCPM-o-demo-iOS) on iPhone and iPad, and (7) online web demo on [server](http://101.126.42.235:30910/). See our [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook) for full usages!
### Key Techniques
<div align="center">
<img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpm-v-4dot5-framework.png" , width=100%>
</div>
- **Architecture: Unified 3D-Resampler for High-density Video Compression.** MiniCPM-V 4.5 introduces a 3D-Resampler that overcomes the performance-efficiency trade-off in video understanding. By grouping and jointly compressing up to 6 consecutive video frames into just 64 tokens (the same token count used for a single image in the MiniCPM-V series), MiniCPM-V 4.5 achieves a 96× compression rate for video tokens. This allows the model to process more video frames without additional LLM computational cost, enabling high-FPS video and long video understanding. The architecture supports unified encoding for images, multi-image inputs, and videos, ensuring seamless capability and knowledge transfer.
- **Pre-training: Unified Learning for OCR and Knowledge from Documents.** Existing MLLMs learn OCR capability and knowledge from documents in isolated training approaches. We observe that the essential difference between these two training approaches is the visibility of the text in images. By dynamically corrupting text regions in documents with varying noise levels and asking the model to reconstruct the text, the model learns to adaptively and properly switch between accurate text recognition (when text is visible) and multimodal context-based knowledge reasoning (when text is heavily obscured). This eliminates reliance on error-prone document parsers in knowledge learning from documents, and prevents hallucinations from over-augmented OCR data, resulting in top-tier OCR and multimodal knowledge performance with minimal engineering overhead.
- **Post-training: Hybrid Fast/Deep Thinking with Multimodal RL.** MiniCPM-V 4.5 offers a balanced reasoning experience through two switchable modes: fast thinking for efficient daily use and deep thinking for complex tasks. Using a new hybrid reinforcement learning method, the model jointly optimizes both modes, significantly enhancing fast-mode performance without compromising deep-mode capability. Incorporated with [RLPR](https://github.com/OpenBMB/RLPR) and [RLAIF-V](https://github.com/RLHF-V/RLAIF-V), it generalizes robust reasoning skills from broad multimodal data while effectively reducing hallucinations.
### Evaluation
<div align="center">
<img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/radar_minicpm_v45.png", width=60%>
</div>
<div align="center">
<img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv_4_5_evaluation_result.png" , width=100%>
</div>
### Inference Efficiency
**OpenCompass**
<div align="left">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>Avg Score ↑</th>
<th>Total Inference Time ↓</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td nowrap="nowrap" align="left">GLM-4.1V-9B-Thinking</td>
<td>10.3B</td>
<td>76.6</td>
<td>17.5h</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiMo-VL-7B-RL</td>
<td>8.3B</td>
<td>76.4</td>
<td>11h</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-V 4.5</td>
<td>8.7B</td>
<td><b>77.0</td>
<td><b>7.5h</td>
</tr>
</tbody>
</table>
</div>
**Video-MME**
<div align="left">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>Avg Score ↑</th>
<th>Total Inference Time ↓</th>
<th>GPU Mem ↓</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td nowrap="nowrap" align="left">Qwen2.5-VL-7B-Instruct</td>
<td>8.3B</td>
<td>71.6</td>
<td>3h</td>
<td>60G</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GLM-4.1V-9B-Thinking</td>
<td>10.3B</td>
<td><b>73.6</td>
<td>2.63h</td>
<td>32G</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-V 4.5</td>
<td>8.7B</td>
<td>73.5</td>
<td><b>0.26h</td>
<td><b>28G</td>
</tr>
</tbody>
</table>
</div>
Both Video-MME and OpenCompass were evaluated using 8×A100 GPUs for inference. The reported inference time of Video-MME includes full model-side computation, and excludes the external cost of video frame extraction (dependent on specific frame extraction tools) for fair comparison.
### Examples
<div align="center">
<a href="https://www.youtube.com/watch?v=Cn23FujYMMU"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/MiniCPM-V%204.5-8.26_img.jpeg", width=70%></a>
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/en_case1.png" alt="en_case1" style="margin-bottom: 5px;">
<img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/en_case2.png" alt="en_case2" style="margin-bottom: 5px;">
<img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/en_case3.jpeg" alt="en_case3" style="margin-bottom: 5px;">
</div>
We deploy MiniCPM-V 4.5 on iPad M4 with [iOS demo](https://github.com/tc-mb/MiniCPM-o-demo-iOS). The demo video is the raw screen recording without editing.
<div align="center">
<img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_en_handwriting.gif" width="45%" style="display: inline-block; margin: 0 10px;"/>
<img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_en_cot.gif" width="45%" style="display: inline-block; margin: 0 10px;"/>
</div>
<div align="center">
<img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_cn_handwriting.gif" width="45%" style="display: inline-block; margin: 0 10px;"/>
<img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_cn_travel.gif" width="45%" style="display: inline-block; margin: 0 10px;"/>
</div>
## Framework Support Matrix
<table>
<thead>
<tr>
<th>Category</th>
<th>Framework</th>
<th>Cookbook Link</th>
<th>Upstream PR</th>
<th>Supported since (branch)</th>
<th>Supported since (release)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2">Edge (On-device)</td>
<td>Llama.cpp</td>
<td><a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-v4_5_llamacpp.md">Llama.cpp Doc</a></td>
<td><a href="https://github.com/ggml-org/llama.cpp/pull/15575">#15575</a> (2025-08-26)</td>
<td>master (2025-08-26)</td>
<td><a href="https://github.com/ggml-org/llama.cpp/releases/tag/b6282">b6282</a></td>
</tr>
<tr>
<td>Ollama</td>
<td><a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-v4_5_ollama.md">Ollama Doc</a></td>
<td><a href="https://github.com/ollama/ollama/pull/12078">#12078</a> (2025-08-26)</td>
<td>Merging</td>
<td>Waiting for official release</td>
</tr>
<tr>
<td rowspan="2">Serving (Cloud)</td>
<td>vLLM</td>
<td><a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-v4_5_vllm.md">vLLM Doc</a></td>
<td><a href="https://github.com/vllm-project/vllm/pull/23586">#23586</a> (2025-08-26)</td>
<td>main (2025-08-27)</td>
<td>Waiting for official release</td>
</tr>
<tr>
<td>SGLang</td>
<td><a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/MiniCPM-v4_5_sglang.md">SGLang Doc</a></td>
<td><a href="https://github.com/sgl-project/sglang/pull/9610">#9610</a> (2025-08-26)</td>
<td>Merging</td>
<td>Waiting for official release</td>
</tr>
<tr>
<td>Finetuning</td>
<td>LLaMA-Factory</td>
<td><a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/finetune_llamafactory.md">LLaMA-Factory Doc</a></td>
<td><a href="https://github.com/hiyouga/LLaMA-Factory/pull/9022">#9022</a> (2025-08-26)</td>
<td>main (2025-08-26)</td>
<td>Waiting for official release</td>
</tr>
<tr>
<td rowspan="3">Quantization</td>
<td>GGUF</td>
<td><a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/gguf/minicpm-v4_5_gguf_quantize.md">GGUF Doc</a></td>
<td>—</td>
<td>—</td>
<td>—</td>
</tr>
<tr>
<td>BNB</td>
<td><a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/bnb/minicpm-v4_5_bnb_quantize.md">BNB Doc</a></td>
<td>—</td>
<td>—</td>
<td>—</td>
</tr>
<tr>
<td>AWQ</td>
<td><a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/awq/minicpm-v4_5_awq_quantize.md">AWQ Doc</a></td>
<td>—</td>
<td>—</td>
<td>—</td>
</tr>
<tr>
<td>Demos</td>
<td>Gradio Demo</td>
<td><a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/web_demo/gradio/README.md">Gradio Demo Doc</a></td>
<td>—</td>
<td>—</td>
<td>—</td>
</tr>
</tbody>
</table>
> Note: If you'd like us to prioritize support for another open-source framework, please let us know via this [short form](https://docs.google.com/forms/d/e/1FAIpQLSdyTUrOPBgWqPexs3ORrg47ZcZ1r4vFQaA4ve2iA7L9sMfMWw/viewform).
## Usage
If you wish to enable thinking mode, provide the argument `enable_thinking=True` to the chat function.
#### Chat with Image
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
torch.manual_seed(100)
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6
image = Image.open('./assets/minicpmo2_6/show_demo.jpg').convert('RGB')
enable_thinking=False # If `enable_thinking=True`, the thinking mode is enabled.
stream=True # If `stream=True`, the answer is returned incrementally and is consumed in a loop below.
# First round chat
question = "What is the landform in the picture?"
msgs = [{'role': 'user', 'content': [image, question]}]
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer,
enable_thinking=enable_thinking,
stream=True
)
generated_text = ""
for new_text in answer:
generated_text += new_text
print(new_text, flush=True, end='')
# Second round chat, pass history context of multi-turn conversation
msgs.append({"role": "assistant", "content": [answer]})
msgs.append({"role": "user", "content": ["What should I pay attention to when traveling here?"]})
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer,
stream=True
)
generated_text = ""
for new_text in answer:
generated_text += new_text
print(new_text, flush=True, end='')
```
You will get the following output:
```shell
# round1
The landform in the picture is karst topography. Karst landscapes are characterized by distinctive, jagged limestone hills or mountains with steep, irregular peaks and deep valleys—exactly what you see here. These unique formations result from the dissolution of soluble rocks like limestone over millions of years through water erosion.
This scene closely resembles the famous karst landscape of Guilin and Yangshuo in China’s Guangxi Province. The area features dramatic, pointed limestone peaks rising dramatically above serene rivers and lush green forests, creating a breathtaking and iconic natural beauty that attracts millions of visitors each year for its picturesque views.
# round2
When traveling to a karst landscape like this, here are some important tips:
1. Wear comfortable shoes: The terrain can be uneven and hilly.
2. Bring water and snacks for energy during hikes or boat rides.
3. Protect yourself from the sun with sunscreen, hats, and sunglasses—especially since you’ll likely spend time outdoors exploring scenic spots.
4. Respect local customs and nature regulations by not littering or disturbing wildlife.
By following these guidelines, you'll have a safe and enjoyable trip while appreciating the stunning natural beauty of places such as Guilin’s karst mountains.
```
#### Chat with Video
```python
## The 3d-resampler compresses multiple frames into 64 tokens by introducing temporal_ids.
# To achieve this, you need to organize your video data into two corresponding sequences:
# frames: List[Image]
# temporal_ids: List[List[Int]].
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
from decord import VideoReader, cpu # pip install decord
from scipy.spatial import cKDTree
import numpy as np
import math
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6
MAX_NUM_FRAMES=180 # Indicates the maximum number of frames received after the videos are packed. The actual maximum number of valid frames is MAX_NUM_FRAMES * MAX_NUM_PACKING.
MAX_NUM_PACKING=3 # indicates the maximum packing number of video frames. valid range: 1-6
TIME_SCALE = 0.1
def map_to_nearest_scale(values, scale):
tree = cKDTree(np.asarray(scale)[:, None])
_, indices = tree.query(np.asarray(values)[:, None])
return np.asarray(scale)[indices]
def group_array(arr, size):
return [arr[i:i+size] for i in range(0, len(arr), size)]
def encode_video(video_path, choose_fps=3, force_packing=None):
def uniform_sample(l, n):
gap = len(l) / n
idxs = [int(i * gap + gap / 2) for i in range(n)]
return [l[i] for i in idxs]
vr = VideoReader(video_path, ctx=cpu(0))
fps = vr.get_avg_fps()
video_duration = len(vr) / fps
if choose_fps * int(video_duration) <= MAX_NUM_FRAMES:
packing_nums = 1
choose_frames = round(min(choose_fps, round(fps)) * min(MAX_NUM_FRAMES, video_duration))
else:
packing_nums = math.ceil(video_duration * choose_fps / MAX_NUM_FRAMES)
if packing_nums <= MAX_NUM_PACKING:
choose_frames = round(video_duration * choose_fps)
else:
choose_frames = round(MAX_NUM_FRAMES * MAX_NUM_PACKING)
packing_nums = MAX_NUM_PACKING
frame_idx = [i for i in range(0, len(vr))]
frame_idx = np.array(uniform_sample(frame_idx, choose_frames))
if force_packing:
packing_nums = min(force_packing, MAX_NUM_PACKING)
print(video_path, ' duration:', video_duration)
print(f'get video frames={len(frame_idx)}, packing_nums={packing_nums}')
frames = vr.get_batch(frame_idx).asnumpy()
frame_idx_ts = frame_idx / fps
scale = np.arange(0, video_duration, TIME_SCALE)
frame_ts_id = map_to_nearest_scale(frame_idx_ts, scale) / TIME_SCALE
frame_ts_id = frame_ts_id.astype(np.int32)
assert len(frames) == len(frame_ts_id)
frames = [Image.fromarray(v.astype('uint8')).convert('RGB') for v in frames]
frame_ts_id_group = group_array(frame_ts_id, packing_nums)
return frames, frame_ts_id_group
video_path="video_test.mp4"
fps = 5 # fps for video
force_packing = None # You can set force_packing to ensure that 3D packing is forcibly enabled; otherwise, encode_video will dynamically set the packing quantity based on the duration.
frames, frame_ts_id_group = encode_video(video_path, fps, force_packing=force_packing)
question = "Describe the video"
msgs = [
{'role': 'user', 'content': frames + [question]},
]
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer,
use_image_id=False,
max_slice_nums=1,
temporal_ids=frame_ts_id_group
)
print(answer)
```
#### Chat with multiple images
<details>
<summary> Click to show Python code running MiniCPM-V 4.5 with multiple images input. </summary>
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True,
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True)
image1 = Image.open('image1.jpg').convert('RGB')
image2 = Image.open('image2.jpg').convert('RGB')
question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.'
msgs = [{'role': 'user', 'content': [image1, image2, question]}]
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
```
</details>
#### In-context few-shot learning
<details>
<summary> Click to view Python code running MiniCPM-V 4.5 with few-shot input. </summary>
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True,
attn_implementation='sdpa', torch_dtype=torch.bfloat16)
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True)
question = "production date"
image1 = Image.open('example1.jpg').convert('RGB')
answer1 = "2023.08.04"
image2 = Image.open('example2.jpg').convert('RGB')
answer2 = "2007.04.24"
image_test = Image.open('test.jpg').convert('RGB')
msgs = [
{'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]},
{'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]},
{'role': 'user', 'content': [image_test, question]}
]
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
```
</details>
## License
#### Model License
* The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
* The usage of MiniCPM-V series model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM-o/blob/main/MiniCPM%20Model%20License.md).
* The models and weights of MiniCPM are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM-V 4.5 weights are also available for free commercial use.
#### Statement
* As an LMM, MiniCPM-V 4.5 generates content by learning from a large amount of multimodal corpora, but it cannot comprehend, express personal opinions, or make value judgements. Anything generated by MiniCPM-V 4.5 does not represent the views and positions of the model developers.
* We will not be liable for any problems arising from the use of the MiniCPM-V models, including but not limited to data security issues, risks to public opinion, or any risks and problems arising from the misdirection, misuse, or improper dissemination of the model.
## Key Techniques and Other Multimodal Projects
👏 Welcome to explore key techniques of MiniCPM-V 4.5 and other multimodal projects of our team:
[VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLPR](https://github.com/OpenBMB/RLPR) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
## Citation
If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
```bib
@article{yao2024minicpm,
title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
journal={Nat Commun 16, 5509 (2025)},
year={2025}
}
```
|
conPoom/AI-thai-model
|
conPoom
| 2025-09-02T04:18:47Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"thai",
"qa",
"fine-tuned",
"th",
"dataset:disease_3000",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2025-09-02T04:01:40Z |
---
datasets:
- disease_3000
language: th
license: mit
metrics:
- perplexity
model_name: Thai GPT-2 Fine-Tuned
tags:
- thai
- gpt2
- qa
- fine-tuned
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A GPT-2 model fine-tuned for Thai question answering, trained on a dataset of 3,000 disease-related question-answer pairs.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** th
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Onair7234/thai-qa-lab-model
|
Onair7234
| 2025-09-02T04:18:31Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"thai",
"qa",
"fine-tuned",
"th",
"dataset:disease_3000",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2025-09-02T04:13:54Z |
---
datasets:
- disease_3000
language: th
license: mit
metrics:
- perplexity
model_name: Thai GPT-2 Fine-Tuned
tags:
- thai
- gpt2
- qa
- fine-tuned
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A GPT-2 model fine-tuned for Thai question answering, trained on a dataset of 3,000 disease-related question-answer pairs.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** th
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1756786629
|
omerbkts
| 2025-09-02T04:17:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:17:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amandacute/blockassist-bc-amphibious_plump_ram_1756786612
|
amandacute
| 2025-09-02T04:17:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious plump ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:17:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NousResearch/Hermes-4-405B
|
NousResearch
| 2025-09-02T04:17:04Z | 252 | 49 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3.1",
"instruct",
"finetune",
"reasoning",
"hybrid-mode",
"chatml",
"function calling",
"tool use",
"json mode",
"structured outputs",
"atropos",
"dataforge",
"long context",
"roleplaying",
"chat",
"conversational",
"en",
"arxiv:2508.18255",
"base_model:meta-llama/Llama-3.1-405B",
"base_model:finetune:meta-llama/Llama-3.1-405B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T02:51:17Z |
---
language:
- en
license: llama3
tags:
- Llama-3.1
- instruct
- finetune
- reasoning
- hybrid-mode
- chatml
- function calling
- tool use
- json mode
- structured outputs
- atropos
- dataforge
- long context
- roleplaying
- chat
base_model: meta-llama/Meta-Llama-3.1-405B
library_name: transformers
widget:
- example_title: Hermes 4
messages:
- role: system
content: >-
You are Hermes 4, a capable, neutrally-aligned assistant. Prefer concise,
correct answers.
- role: user
content: >-
Explain the difference between BFS and DFS to a new CS student.
model-index:
- name: Hermes-4-Llama-3.1-405B
results: []
---
# Hermes 4 — Llama-3.1 405B

## Model Description
Hermes 4 405B is a frontier, hybrid-mode **reasoning** model based on Llama-3.1-405B by Nous Research that is aligned to **you**.
Read the Hermes 4 technical report here: <a href="https://arxiv.org/abs/2508.18255">Hermes 4 Technical Report</a>
Chat with Hermes in Nous Chat: https://chat.nousresearch.com
Training highlights include a newly synthesized post-training corpus emphasizing verified reasoning traces, massive improvements in math, code, STEM, logic, creativity, and format-faithful outputs, while preserving general assistant quality and broadly neutral alignment.
## What’s new vs Hermes 3
- **Post-training corpus**: Massively increased dataset size from 1M samples and 1.2B tokens to **~5M samples / ~60B tokens** blended across reasoning and non-reasoning data.
- **Hybrid reasoning mode** with explicit `<think>…</think>` segments when the model decides to deliberate, plus options to skip deliberation when you want faster responses.
- **Reasoning** that is high quality and expressive, improving math, code, STEM, logic, and even creative writing and subjective responses.
- **Schema adherence & structured outputs**: trained to produce valid JSON for given schemas and to repair malformed objects.
- **Much easier to steer and align**: substantial improvements in steerability, especially reduced refusal rates.
## Our Mission: Frontier Capabilities Aligned to You
In pursuit of our mission to produce models that are open, steerable, capable of the full range of human expression, and alignable to your values, we created a new benchmark, RefusalBench, which tests a model's willingness to be helpful in a variety of scenarios commonly disallowed by both closed and open models.

Hermes 4 achieves SOTA on RefusalBench among all popular closed and open models, remaining helpful and conforming to your values without censorship.
## Benchmarks (Hermes 4 405B)

> Full tables, settings, and comparisons are in the technical report.
## Prompt Format
Hermes 4 uses the Llama-3 chat format with role headers and special tags.
**Basic chat:**
```
<|start_header_id|>system<|end_header_id|>
You are Hermes 4. Be concise and helpful.<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Explain the photoelectric effect simply.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
```
### Reasoning mode
Reasoning mode can be activated with the chat template via the flag `thinking=True` or by using the following system prompt:
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
Note that you can add any additional system instructions before or after this system message, and they will adjust the model's policies, style, and effort of thinking, as well as its post-thinking style, format, identity, and more. You may also interleave the tool definition system message with the reasoning one.
When the model chooses to deliberate, it emits:
```
<|start_header_id|>assistant<|end_header_id|>
<think>
…model’s internal reasoning may appear here…
</think>
Final response starts here…<|eot_id|>
```
Additionally, we provide a flag to keep the content between the `<think> ... </think>` tags, which you can experiment with by setting `keep_cots=True`.
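As a concrete illustration, here is a minimal sketch of toggling these flags through `tokenizer.apply_chat_template`. It assumes the Hermes 4 chat template consumes the `thinking` and `keep_cots` kwargs described above (recent `transformers` versions forward extra keyword arguments to the chat template); consult the repository's tokenizer configuration for the authoritative behavior.
```python
# Minimal sketch: toggling reasoning mode via the chat template.
# Assumes the template reads the `thinking` / `keep_cots` kwargs described above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-4-405B")

messages = [
    {"role": "system", "content": "You are Hermes 4. Be concise."},
    {"role": "user", "content": "Prove that the square root of 2 is irrational."},
]

# Reasoning on: the template injects the deep-thinking system instruction.
prompt_thinking = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True,
    thinking=True, keep_cots=True,
)

# Reasoning off: plain chat prompt for faster responses.
prompt_fast = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True,
)
print(prompt_thinking)
```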
## Function Calling & Tool Use
Hermes 4 supports function/tool calls *within* a single assistant turn, interleaved with its reasoning:
**System message (example):**
```
<|start_header_id|>system<|end_header_id|>
You are a function-calling AI. Tools are provided inside <tools>…</tools>.
When appropriate, call a tool by emitting a <tool_call>{...}</tool_call> object.
After a tool responds (as <tool_response>), continue reasoning inside <think> and produce the final answer.
<tools>
{"type":"function","function":{"name":"get_weather","description":"Get weather by city","parameters":{"type":"object","properties":{"city":{"type":"string"}},"required":["city"]}}}
</tools><|eot_id|>
```
Note that you may also simply place tool definitions into the "tools:" field of your messages, and the chat template will parse and create the system prompt for you. This also works with reasoning mode for improved accuracy of tool use.
The model will then generate tool calls within `<tool_call> {tool_call} </tool_call>` tags for easy parsing. The tool_call tags are also added tokens, which makes them easy to detect while streaming. There are also automatic tool parsers built into vLLM and SGLang for Hermes: set the tool parser in vLLM to `hermes` and in SGLang to `qwen25`.
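To make the parsing step concrete, here is a minimal, illustrative sketch of extracting tool calls from a finished assistant turn. It assumes the payload inside the tags is a JSON object of the form `{"name": ..., "arguments": {...}}`, as in the system prompt above; the built-in vLLM/SGLang parsers mentioned here remain the recommended path for production serving.
```python
# Illustrative sketch: pull <tool_call>{...}</tool_call> payloads out of an assistant turn.
# Production servers (vLLM / SGLang) ship their own Hermes tool-call parsers.
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(assistant_text: str):
    """Return each tool call as a parsed dict, e.g. {"name": ..., "arguments": {...}}."""
    calls = []
    for match in TOOL_CALL_RE.finditer(assistant_text):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            pass  # malformed payload; skip it (or ask the model to repair it)
    return calls

example = (
    "<think>The user asked about weather, so I should call get_weather.</think>\n"
    '<tool_call>{"name": "get_weather", "arguments": {"city": "Berlin"}}</tool_call>'
)
print(extract_tool_calls(example))
# -> [{'name': 'get_weather', 'arguments': {'city': 'Berlin'}}]
```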
## Inference Notes
- **Sampling defaults that work well:** `temperature=0.6, top_p=0.95, top_k=20`.
- **Template:** Use the Llama chat format for Hermes 4 70B and 405B as shown above, or set `add_generation_prompt=True` when using `tokenizer.apply_chat_template(...)`.
### Transformers example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "NousResearch/Hermes-4-Llama-3.1-405B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto"
)
messages = [
{"role":"system","content":"You are Hermes 4. Be concise."},
{"role":"user","content":"Summarize CRISPR in 3 sentences."}
]
inputs = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
**inputs, max_new_tokens=400, temperature=0.6, top_p=0.95, top_k=20, do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For production serving on multi-GPU nodes, consider tensor parallel inference engines (e.g., SGLang/vLLM backends) with prefix caching.
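As one possible starting point, the sketch below shows a vLLM launch with tensor parallelism, prefix caching, and the `hermes` tool parser mentioned earlier. Flag names and the appropriate `--tensor-parallel-size` depend on your vLLM version and GPU count, so treat this as an assumption to verify against `vllm serve --help`.
```bash
# Illustrative launch; adjust tensor-parallel size to your node and verify flags for your vLLM version.
vllm serve NousResearch/Hermes-4-405B \
  --tensor-parallel-size 8 \
  --enable-prefix-caching \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```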
## Inference Providers:
### Nous Portal:
<a href="https://portal.nousresearch.com"><img width=256 alt="chutes logo" src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/6YytY7N0mjCnBQvWo3qtv.png"></a>
### Chutes:
<a href="https://chutes.ai/app"><img width=256 alt="chutes logo" src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/l14AWPv6cSvaprpwK_IWY.png"></a>
### Nebius:
<a href="https://nebius.com/services/studio-inference-service">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vhL0oAomFa_awBdt2KF_x.png">
<source media="(prefers-color-scheme: light)" srcset="https://cdn-uploads.huggingface.co/production/uploads/64b21cbb2fc8324fcb1dac03/LjAfeFfAz8ac5rV-iiwj5.png">
<img width=256 alt="nebius.com logo" src="https://cdn-uploads.huggingface.co/production/uploads/64b21cbb2fc8324fcb1dac03/LjAfeFfAz8ac5rV-iiwj5.png">
</picture>
</a>
### Luminal:
<a href="https://luminalai.com/">
<img width=256 alt="luminal logo" src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/FIHsRdjMMP0HUjebiuJyH.png">
</a>
# Quantized / Smaller Variants
Hermes 4 is available in its original BF16 weights, as well as FP8 variants and GGUF variants from LM Studio.
FP8: https://huggingface.co/NousResearch/Hermes-4-405B-FP8
GGUF (Courtesy of LM Studio team!):
https://huggingface.co/lmstudio-community/Hermes-4-405B-GGUF
Hermes 4 is also available in smaller sizes (e.g., 70B and 14B) with similar prompt formats.
See the Hermes 4 collection to explore them all:
https://huggingface.co/collections/NousResearch/hermes-4-collection-68a731bfd452e20816725728
# How to cite
```bibtex
@misc{teknium2025hermes4technicalreport,
title={Hermes 4 Technical Report},
author={Ryan Teknium and Roger Jin and Jai Suphavadeeprasit and Dakota Mahan and Jeffrey Quesnelle and Joe Li and Chen Guang and Shannon Sands and Karan Malhotra},
year={2025},
eprint={2508.18255},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2508.18255},
}
```
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756786522
|
matherchodhuuu
| 2025-09-02T04:16:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:16:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756786251
|
xinnn32
| 2025-09-02T04:12:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:11:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756786286
|
omerbektass
| 2025-09-02T04:11:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:11:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ParadaFaruan/thai-qa-lab-model
|
ParadaFaruan
| 2025-09-02T04:11:34Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"thai",
"qa",
"fine-tuned",
"th",
"dataset:disease_3000",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2025-09-02T04:01:42Z |
---
datasets:
- disease_3000
language: th
license: mit
metrics:
- perplexity
model_name: Thai GPT-2 Fine-Tuned
tags:
- thai
- gpt2
- qa
- fine-tuned
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A GPT-2 model fine-tuned for Thai question answering, trained on a dataset of 3,000 disease-related question-answer pairs.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** th
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hanzogak/comradeshipXL
|
hanzogak
| 2025-09-02T04:11:31Z | 0 | 14 | null |
[
"stable-diffusion-xl",
"stable-diffusion",
"text-to-image",
"region:us"
] |
text-to-image
| 2023-12-25T18:14:43Z |
---
pipeline_tag: text-to-image
tags:
- stable-diffusion-xl
- stable-diffusion
---
Comradeship XL
=============
Comradeship in action!
v7/v7A/v7PP recommended CFG: 6.5
v4BB/5A/6/6A/v12T1/v14T1S recommended CFG: 4~6
Older Comradeship XL models recommended CFG: 2.5~3.5
## comradeshipXL-v14VWW
Comradeship XL v14VW3 × WAI-NSFW-illustrious-SDXL v15 Karcher-merge V-pred model
## comradeshipXL-v14KD2
This is a minor update model of comradeshipXL-v14KC.
## comradeshipXL-v14VW2 / v14VW3
A V-Pred merge model with slightly improved natural language prompt performance using Karcher-merge. (https://github.com/win10ogod/Karcher-merge)
## comradeshipXL-v14KC
The base SDXL model is a 1:1 mix of ILXL 2.0 and NoobAI EPS 1.1. This is a specialized model for high resolutions of 1.5K~1.75K.
## comradeshipXL-v14VX
As a model focused on improving high-resolution generation in NoobAI-based models, it is not intended for landscape resolutions; the recommended resolution is 1536x1536 (1.5K), up to 1796x1796 (1.75K).
## comradeshipXL-v14K1AX
Improved high-resolution generation performance based on v14K1A model.
## comradeshipXL-v14K1A
A variant of Illustrious XL 1.1 from Comradeship XL v14.
## comradeshipXL-v14K3
Merged noobai-XL EPS 1.1 while retaining the 1536x1536 high-resolution capabilities of Illustrious-XL 1.0.
## comradeshipXL-v17T3
Animagine XL 4.0 Opt + mann-e-004 LoRA = comradeshipXL-v17T3
## comradeshipXL-v17T1
Animagine XL 4.0 + mann-e-004 LoRA = comradeshipXL-v17T1
## comradeshipXL-v14K
Illustrious XL 1.0 version of Comradeship XL v14
## comradeshipXL-v14V
This is a minor update of Comradeship XL v14T15V: NoobAI XL V-Pred 0.75s was replaced with NoobAI XL V-Pred 1.0 in the merge recipe. Since it is not a general SDXL (eps-prediction) model, it must be used in a program that supports v-prediction.
## comradeshipXL-v14VT
SDXL Hyper version of Comradeship XL v14V. Available in 6~15 steps / CFG 1.0 settings. Using the recommended settings, you can create images quickly.
## comradeshipXL-v14
NoobAI XL EPS 1.1 version of Comradeship XL v14V
## comradeshipXL-v14T15V
This is a minor update of Comradeship XL v14T14V: the base model was changed to NoobAI XL V-Pred 0.75s and the difference (CLIP-SAE-ViT-L-14 - CLIP-ViT-L-14) was merged in. Since it is not a general SDXL (eps-prediction) model, it must be used in a program that supports v-prediction.
## comradeshipXL-v14T14V/V2/V3
This is a v-prediction model based on NoobAI XL V-Pred 0.5/0.6/0.65s
Since it is not a general SDXL (eps-prediction) model, it must be used in a program that supports v-prediction.
Euler CFG++ is recommended for this model, with CFG between 1 and 2.5. A minimum of 8 steps is required; 12 to 14 steps are recommended.
## comradeshipXL-v14T13A/B
In the merge recipe for the Comradeship XL v14T13 model, I changed from the NoobAI-XL Early Access version to the NoobAI-XL 0.5/0.75 version.
## comradeshipXL-v14T1S
This model is based on Illustrious-XL v0.1.
Merge recipes can be found at this link. https://huggingface.co/hanzogak/comradeshipXL/tree/main/CSv14T1S-KIT
## comradeshipXL-v9MB4
Anime performance has been slightly improved.
Check the ComfyUI EXIF of the models for details on how to merge.
v9B2-Lite5 → v9M6K → v9MF4 → v9MB4
## comradeshipXL-v9MB
Hard NSFW performance has been improved.
Check the ComfyUI EXIF of the models for details on how to merge.
v9B2-Lite3 → v9M3 → v9M3K → v9MF → v9MB
## comradeshipXL-v9M
The v9M model is a hybrid Pony model that carries the character/artist features of the anime SDXL models. Features may not be as distinct as in purebred models.
InterDiffusion-4.0 - SDXL 1.0 = InterDiffusion-4.0 LoRA
Kohaku-XL Epsilon rev1 - kohaku-xl-epsilon-lyco = dummy2
Kohaku-XL Epsilon rev3 - dummy2 = Epsilon_dummy2 LoRA
comradeshipXL-v9AA-SPO-DPO-Flash-MaPObeta + mapo-safe LoRA + mann-e-004 LoRA + (InterDiffusion-4.0 × -1) + (Epsilon_dummy2 LoRA × -1) + marine-animeXL-290001 LoRA = comradeshipXL-v9AD2-Lite
Kohaku-XL Epsilon rev3 + mann-e-004 LoRA + SPO LoCon + DPO LoRA + mapo-safe LoRA + mapo-beta LoRA = comradeshipXL-v10T1
comradeshipXL-v10T1 + marine-animeXL-290001 LoRA = comradeshipXL-v10T3
Neta Art XL v2 + mann-e-004 LoRA + SPO LoCon + DPO LoRA + mapo-safe LoRA + mapo-beta LoRA = comradeshipXL-v11T1
comradeshipXL-v10T3 × 0.33 + comradeshipXL-v11T1 × 0.33 + comradeshipXL-v9AD2-Lite × 0.34 = comradeshipXL-v9B2-Lite
comradeshipXL-v9B2-Lite - SDXL 1.0 = CSv9B2-Lite LoRA
CSv9B2-Lite × 0.4 Unet + AAAAutism AAAA Restart × 0.6 Unet + CSv9B2-Lite TE + CSv9B2-Lite LoRA × 0.35 = comradeshipXL-v9M
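For readers unfamiliar with the notation above, the `A unet × w + B unet × (1 - w)` steps are plain weighted averages of the UNet weights, while `A - B = LoRA` denotes extracting the difference between two checkpoints. The sketch below shows only the weighted-average step on hypothetical single-file SDXL checkpoints (filenames are placeholders); the actual merges above were produced with ComfyUI and other merge tools, not this script.
```python
# Minimal sketch of an "A unet x 0.5 + B unet x 0.5" merge on SDXL single-file checkpoints.
# Hypothetical filenames; assumes both files share the same keys and that UNet weights
# use the usual "model.diffusion_model." prefix.
from safetensors.torch import load_file, save_file

a = load_file("model_a.safetensors")
b = load_file("model_b.safetensors")

merged = {}
for key, tensor_a in a.items():
    if key.startswith("model.diffusion_model."):      # UNet weights: average them
        merged[key] = 0.5 * tensor_a + 0.5 * b[key]
    else:                                             # text encoder / VAE: keep model A's
        merged[key] = tensor_a

save_file(merged, "merged.safetensors")
```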
## comradeshipXL-v12T3-WDV03
CFG 4~6 / Euler Ancestral / beta
This model is a merge model based on wdv-test-v0.1 from [wdv-tests](https://huggingface.co/waifu-diffusion/wdv-tests).
[ModelSamplingWaifuDiffusionV](https://gist.github.com/neggles/ecb6327251a9e274428d07636c727eb9) is required. (ComfyUI ONLY!)
ArtiWaifu Diffusion - SDXL 1.0 = ArtiWaifu LoRA
wdv-test-v0.1 + mann-e-004 LoRA × 0.75 + ArtiWaifu LoRA × 0.6 = comradeshipXL-v12T3-WDV01
## comradeshipXL-v12T1-WDV01
CFG 4~6 / Euler Ancestral / beta
This model is a merge model based on wdv-test-v0.1 from [wdv-tests](https://huggingface.co/waifu-diffusion/wdv-tests).
[ModelSamplingWaifuDiffusionV](https://gist.github.com/neggles/ecb6327251a9e274428d07636c727eb9) is required. (ComfyUI ONLY!)
wdv-test-v0.1 - Animagine 3.1 = wdv01 LoRA (Unet only)
Mann-E_Dreams-0.0.4 - SDXL 1.0 = mann-e-004 LoRA (Unet only)
wdv-test-v0.1 + mann-e-004 LoRA × 0.75 + wdv01 LoRA × (-0.01) = comradeshipXL-v12T1-WDV01
## comradeshipXL-v9AA
In this version, mapo-beta LoRA has been additionally merged in v9A.
## comradeshipXL-v9A
This version merges the SPO, DPO, and Flash LoRAs.
kivotos-xl-2.0 unet × 0.5 + autismmixSDXL_autismmixConfetti unet × 0.5 + kivotos-xl-2.0 Text Encoder = comradeshipXL-v7T10
comradeshipXL-v9T4 × 0.5 + comradeshipXL-v7T10 × 0.5 = comradeshipXL-v9T13
comradeshipXL-v9T13 + SPO LoCon+ DPO LoRA + Flash LoRA × 0.55 = comradeshipXL-v9A-SPO-DPO-Flash
## comradeshipXL-v9
It's a mix of ArtiWaifu, Animagine XL V3.1, and Pony. I haven't tested it, but I don't think LoRAs will work.
comradeshipXL-v9T4 × 0.5 + comradeshipXL-v7A × 0.5 = comradeshipXL-v9
## comradeshipXL-v8T1 / v9T4
Comradeship XL Version 9 Test 4
Hassaku XL (Hentai) v 1.1 - Pony Diffusion V6 XL = marine-animeXL_h11pd6_locon
PD for Anime - Pony Diffusion V6 XL = marine-animeXL_pa2pd6_locon
autismmixSDXL autismmixConfett + marine-animeXL_pa2pd6_locon + marine-animeXL_h11pd6_locon = comradeshipXL-v8T1
ArtiWaifu Diffusion v1.0 unet × 0.7 + comradeshipXL-v8T1 unet × 0.3 + ArtiWaifu Diffusion v1.0 Text Encoder = comradeshipXL-v9T4
## comradeshipXL-v9T1
Comradeship XL Version 9 Test 1
ArtiWaifu Diffusion v1.0 unet × 0.7 + autismmixSDXL autismmixConfetti unet × 0.3 + ArtiWaifu Diffusion v1.0 Text Encoder = comradeshipXL-v9T1
## comradeshipXL-v7PP
This model is intended to combine the performance of pony_pencil-XL with support for Animagine XL V3.1 characters.
U-Net data from PD6 (Pony)-based models are merged. Therefore, some SDXL LoRAs may not work.
comradeshipXL-v7P unet × 0.5 + pony_pencil-XL v1.0.1 unet × 0.5 + comradeshipXL-v7P Text Encoder = comradeshipXL-v7PP
## comradeshipXL-v7A
The license of comradeshipXL-v7A is the same as Animagine XL V3.1 and Kohaku-XL Epsilon Rev1.
Pony Diffusion V6 XL - sd_xl_base_1.0 = marine-transXL-xl_to_pd6_locon
Animagine XL V3.1 - sd_xl_base_1.0 = marine-transXL-xl_to_am31_locon
comradeshipXL-v7P.json(Animagine XL V3.1, autismmixSDXL_autismmixConfetti, marine-transXL-xl_to_am31_locon, marine-transXL-xl_to_pd6_locon) = comradeshipXL-v7P
comradeshipXL-v7P unet * 0.7 + autismmixSDXL_autismmixConfetti unet * 0.3 + comradeshipXL-v7P Text Encoder = comradeshipXL-v7PT1
comradeshipXL-v7T5 unet * 0.55 + comradeshipXL-v7PT1 unet * 0.45 + comradeshipXL-v7T5 Text Encoder = comradeshipXL-v7PT2
comradeshipXL-v7PT2 + Kohaku-XL Epsilon Rev1 * comradeshipXL-v7A.json = comradeshipXL-v7A
## comradeshipXL-v7
The license of comradeshipXL-v7 is the same as Animagine XL V3.1 and Kohaku-XL Epsilon Rev1.
Animagine XL V3.1 + Kohaku-XL Epsilon Rev1 * comradeshipXL-v7T5.json = comradeshipXL-v7T5
comradeshipXL-v7T5 + marine-animeXL_v06v02m * 0.5 = comradeshipXL-v7
## comradeshipXL-v6 / v6A
The license of comradeshipXL-v6/v6A is the same as Animagine XL V3.
Hassaku XL beta v0.6 - Hassaku XL beta 0.2M = marine-animeXL_v06v02m
Kohaku-XL Delta rev1 - Kohaku-XL Delta base = marine-animeXL-d1db
anima_pencil-XL v2.0.0 + (marine-animeXL-d1db × 0.65) + marine-animeXL_v06v02m = comradeshipXL-v6
Animagine XL V3.1 - Animagine XL V3.0 = marine-animeXL-3130
anima_pencil-XL v2.0.0 + marine-animeXL-3130 + marine-animeXL_v06v02m + (marine-animeXL-d1db × 0.6) = comradeshipXL-v6A
## comradeshipXL-v5 Test / v5 Lite / v5A
The license of comradeshipXL-v5 Test / v5 Lite / v5A is the same as Pony Diffusion V6 XL.
[Pony Diffusion V6 XL](https://civitai.com/models/257749/pony-diffusion-v6-xl?modelVersionId=290640) + dpo-sdxl-text2image-v1-LoRA + marine-animeXL-310001 LoRA + marine-animeXL-b7b5 LoRA + marine-animeXL-b71b5 LoRA = comradeshipXL-v5Test-DPO
Pony Diffusion V6 XL + dpo-sdxl-text2image-v1-LoRA + [Smooth Anime Styles for Pony Diffusion V6 XL LoRA](https://civitai.com/models/264290?modelVersionId=298238) + [Anime Styles for Pony Diffusion V6 XL LoRA](https://civitai.com/models/264290?modelVersionId=298005) = comradeshipXL-v5Lite-DPO
Pony Diffusion V6 XL + dpo-sdxl-text2image-v1-LoRA + plastic-novel LoRA * 0.25 + plastic-1 LoRA * 0.25 + plastic-2 LoRA * 0.25 + plastic-3 LoRA * 0.25 = comradeshipXL-v5A-DPO
## comradeshipXL-v4 / v4B / v4BB
The license of comradeshipXL-v4/v4B/v4BB is the same as Animagine XL V3.
blue_pencil-XL v3.1.0 - blue_pencil-XL v0.0.1 = marine-animeXL-310001 LoRA
[Animagine XL V3](https://huggingface.co/cagliostrolab/animagine-xl-3.0) + dpo-sdxl-text2image-v1-LoRA + marine-animeXL-310001 LoRA + marine-animeXL-b7b5 LoRA + marine-animeXL-b71b5 LoRA + marine-animeXL-200121 LoRA = comradeshipXL-v4-DPO
v4B-DPO Changed Animagine XL V3 to [Animagine XL V3 Base](https://huggingface.co/cagliostrolab/animagine-xl-3.0-base)
[bulldozer_BETA](https://civitai.com/models/264323?modelVersionId=298024) - Animagine XL V3 = marine-animeXL-am3_to_ab1 LoRA
Animagine XL V3 Base + dpo-sdxl-text2image-v1-LoRA + marine-animeXL-310001 LoRA + marine-animeXL-b7b5 LoRA + marine-animeXL-b71b5 LoRA + marine-animeXL-200121 LoRA + marine-animeXL-am3_to_ab1 LoRA = comradeshipXL-v4BB-DPO
v4BB recommend CFG - 4~6
## comradeshipXL-v3C-DPO-310
Changed blue_pencil-XL v2.9.0 to v3.1.0
## comradeshipXL-v3C-DPO
[AnySomniumXL](https://civitai.com/models/228270/anysomniumxl) v2 - AnySomniumXL 1.2.1 = marine-animeXL-200121
[DucHaiten-Real3D-NSFW-XL](https://civitai.com/models/247266/duchaiten-real3d-nsfw-xl) v1.0 - DucHaiten-Real3D-NSFW-XL v0.16 = marine-beautyXL-100016 LoRA
Hassaku XL beta v 0.1 + dpo-sdxl-text2image-v1-LoRA + marine-animeXL-290001 LoRA + marine-animeXL-b7b5 LoRA + marine-animeXL-b71b5 LoRA + marine-animeXL-200121 LoRA = comradeshipXL-v3C-DPO
## comradeshipXL-v3B-DPO
[Kohaku-XL beta7.1(?)](https://civitai.com/models/162577?modelVersionId=203416) - Kohaku-XL beta5 = marine-animeXL-b71b5 LoRA
blue_pencil-XL v2.9.0 - blue_pencil-XL v0.0.1 = marine-animeXL-290001 LoRA
Hassaku XL beta v 0.1 + dpo-sdxl-text2image-v1-LoRA + marine-animeXL-290001 LoRA + marine-animeXL-b7b5 LoRA + marine-animeXL-b71b5 LoRA= comradeshipXL-v3-DPO
[Baguette-Anime XL v1.0](https://civitai.com/models/255395/baguette-anime-xl?modelVersionId=287909) - SDXL 1.0 = marine-transXL-xl_to_ba
comradeshipXL-v3-DPO + (marine-transXL-xl_to_ba * 0.6) = comradeshipXL-v3B-DPO
## comradeshipXL-v2-DPO-250p / 290 / 300
Instead of blue_pencil-XL v2.0.0...
* 250p - [blue_pencil-XL-v2.5.0-photo-style](https://huggingface.co/bluepen5805/blue_pencil-XL/blob/main/blue_pencil-XL-v2.5.0-photo-style.safetensors)
* 290 - [blue_pencil-XL v2.9.0](https://huggingface.co/bluepen5805/blue_pencil-XL/blob/main/blue_pencil-XL-v2.9.0.safetensors)
* 300 - [blue_pencil-XL 3.0.0](https://civitai.com/models/119012?modelVersionId=281771)
## comradeshipXL-v2-DPO
[Kohaku-XL beta7](https://civitai.com/models/162577?modelVersionId=192804) - [Kohaku-XL beta5](https://civitai.com/models/162577?modelVersionId=183131) = marine-animeXL-b7b5 LoRA
[blue_pencil-XL v2.0.0](https://civitai.com/models/119012?modelVersionId=245614) - [blue_pencil-XL v0.0.1](https://civitai.com/models/119012?modelVersionId=129169) = marine-animeXL-200001 LoRA
[Hassaku XL beta v 0.1](https://civitai.com/models/140272?modelVersionId=251889) + [dpo-sdxl-text2image-v1-LoRA](https://huggingface.co/hanzogak/dpo-sdxl-text2image-v1-LoRA) + marine-animeXL-200001 LoRA + marine-animeXL-b7b5 LoRA = comradeshipXL-v2-DPO
## comradeshipXL-v1-DPO
[SDXL Yamer's Anime Stage ÉNA](https://civitai.com/models/76489?modelVersionId=269502) - [SDXL Yamer's Anime V1](https://civitai.com/models/76489?modelVersionId=137609) = marine-animeXL-SNv1 LoRA
[Hassaku XL beta v 0.1](https://civitai.com/models/140272?modelVersionId=251889) + marine-animeXL-SNv1 LoRA + [dpo-sdxl-text2image-v1-LoRA](https://huggingface.co/hanzogak/dpo-sdxl-text2image-v1-LoRA) = comradeshipXL-v1-DPO
|
Plaifa/thai-qa-lab-model
|
Plaifa
| 2025-09-02T04:11:17Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"thai",
"qa",
"fine-tuned",
"th",
"dataset:disease_3000",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2025-09-02T04:04:09Z |
---
datasets:
- disease_3000
language: th
license: mit
metrics:
- perplexity
model_name: Thai GPT-2 Fine-Tuned
tags:
- thai
- gpt2
- qa
- fine-tuned
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A GPT-2 model fine-tuned for Thai question answering, trained on a dataset of 3,000 disease-related question-answer pairs.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** th
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bunyanatintajak/thai-qa-lab
|
bunyanatintajak
| 2025-09-02T04:11:11Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"thai",
"qa",
"fine-tuned",
"th",
"dataset:disease_3000",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2025-09-02T04:02:02Z |
---
datasets:
- disease_3000
language: th
license: mit
metrics:
- perplexity
model_name: Thai GPT-2 Fine-Tuned
tags:
- thai
- gpt2
- qa
- fine-tuned
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A GPT-2 model fine-tuned for Thai question answering, trained on a dataset of 3,000 disease-related question-answer pairs.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** th
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kunthida/thai-qa-lab
|
Kunthida
| 2025-09-02T04:10:57Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"thai",
"qa",
"fine-tuned",
"th",
"dataset:disease_3000",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2025-09-02T04:04:01Z |
---
datasets:
- disease_3000
language: th
license: mit
metrics:
- perplexity
model_name: Thai GPT-2 Fine-Tuned
tags:
- thai
- gpt2
- qa
- fine-tuned
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A GPT-2 model fine-tuned for Thai question answering, trained on a dataset of 3,000 disease-related question-answer pairs.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** th
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Witan7725/thai_qa_lab_model
|
Witan7725
| 2025-09-02T04:10:49Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"thai",
"qa",
"fine-tuned",
"th",
"dataset:disease_3000",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2025-09-02T04:01:49Z |
---
datasets:
- disease_3000
language: th
license: mit
metrics:
- perplexity
model_name: Thai GPT-2 Fine-Tuned
tags:
- thai
- gpt2
- qa
- fine-tuned
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A GPT-2 model fine-tuned for Thai question answering, trained on a dataset of 3,000 disease-related question-answer pairs.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** th
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756786152
|
liukevin666
| 2025-09-02T04:10:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:10:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1756786172
|
omerbkts
| 2025-09-02T04:09:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:09:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756786034
|
akirafudo
| 2025-09-02T04:07:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:07:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nick1880/blockassist-bc-barky_powerful_falcon_1756785824
|
nick1880
| 2025-09-02T04:04:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky powerful falcon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:04:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky powerful falcon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756785711
|
matherchodhuuu
| 2025-09-02T04:04:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:04:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756785583
|
xinnn32
| 2025-09-02T04:00:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T04:00:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756785575
|
omerbektass
| 2025-09-02T03:59:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:59:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756785505
|
liukevin666
| 2025-09-02T03:59:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:59:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kunjcr2/GatorGPT2
|
kunjcr2
| 2025-09-02T03:59:14Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gator-transformer",
"text-generation",
"decoder-only",
"nlp",
"autoregressive",
"rope",
"gqa",
"rmsnorm",
"swiglu",
"from-scratch",
"custom_code",
"en",
"dataset:roneneldan/TinyStories",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-09-02T02:57:28Z |
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- decoder-only
- nlp
- autoregressive
- rope
- gqa
- rmsnorm
- swiglu
- from-scratch
datasets:
- roneneldan/TinyStories
license: apache-2.0
model-index:
- name: GatorGPT2
results: []
---
# 🐊 GatorGPT2
**GatorGPT2** is a small, decoder-only Transformer trained from scratch on a subset of **TinyStories** for next-token prediction.
It uses **RoPE** (rotary positional embeddings), **GQA** (grouped-query attention), **RMSNorm**, and a **SwiGLU MLP**.
Tokenizer is **tiktoken** with **p50k_base** vocabulary.
> **Repo**: `kunjcr2/GatorGPT2`
> **Intended use**: research, experimentation, educational demos for training/serving custom LMs
---
## 🔧 Architecture
- **Type**: Decoder-only, causal LM
- **Layers**: `num_hidden_layers = 10`
- **Hidden size**: `hidden_size = 448`
- **Heads**: `num_attention_heads = 8` (GQA with 2 KV heads per query group)
- **FFN**: SwiGLU, `d_ff ≈ 2× hidden_size`
- **Norm**: RMSNorm (pre-norm blocks)
- **Positional**: RoPE
- **Vocab**: `vocab_size = 50,257` (tiktoken p50k_base)
- **Context length**: `max_position_embeddings = 1024`
- **Weight tying**: output head tied with token embeddings
- **Files**:
- `pytorch_model.bin` (or `model.safetensors`)
- `config.json` (`model_type: "gator-transformer"`, `auto_map` provided)
- `modeling_gator.py`, `configuration_gator.py`, `__init__.py`
- `tokenizer_manifest.json` → `{ "library": "tiktoken", "encoding": "p50k_base" }`
> Custom code is loaded via `trust_remote_code=True`.
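The sketch below illustrates the RMSNorm and SwiGLU blocks named in the list above, using the model's stated dimensions. It is illustrative only and may differ in detail from `modeling_gator.py`, which remains the source of truth.
```python
# Illustrative RMSNorm and SwiGLU blocks (see modeling_gator.py for the real implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        # Scale by the inverse root-mean-square of the features, then apply a learned gain.
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return self.weight * (x * rms)

class SwiGLU(nn.Module):
    def __init__(self, dim: int = 448, d_ff: int = 896):  # d_ff ~ 2x hidden_size
        super().__init__()
        self.gate = nn.Linear(dim, d_ff, bias=False)
        self.up = nn.Linear(dim, d_ff, bias=False)
        self.down = nn.Linear(d_ff, dim, bias=False)

    def forward(self, x):
        # SwiGLU: silu(gate(x)) * up(x), projected back to the model width.
        return self.down(F.silu(self.gate(x)) * self.up(x))

x = torch.randn(2, 16, 448)             # (batch, seq, hidden)
print(SwiGLU()(RMSNorm(448)(x)).shape)  # torch.Size([2, 16, 448])
```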
---
## 📦 Install
```bash
pip install torch transformers tiktoken
```
---
## 🚀 Quickstart (Transformers + tiktoken)
```python
import torch
from transformers import AutoModelForCausalLM
import tiktoken
MODEL_ID = "kunjcr2/GatorGPT2"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# Load model (uses custom modeling code)
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
trust_remote_code=True,
torch_dtype=torch.float32,
).to(DEVICE).eval()
# Tokenizer (p50k_base via tiktoken)
tok = tiktoken.get_encoding("p50k_base")
def generate_greedy(prompt: str, max_new_tokens: int = 64) -> str:
ids = tok.encode(prompt)
x = torch.tensor([ids], device=DEVICE)
for _ in range(max_new_tokens):
with torch.no_grad():
out = model(x)
logits = out["logits"] if isinstance(out, dict) else out.logits
next_id = int(torch.argmax(logits[0, -1]))
x = torch.cat([x, torch.tensor([[next_id]], device=DEVICE)], dim=1)
return tok.decode(x[0].tolist()).replace("<|endoftext|>", "").strip()
print(generate_greedy("Little girl was"))
```
### Temperature-only sampling (no top-k/p)
```python
def generate_temp(prompt, max_new_tokens=64, temperature=0.9):
ids = tok.encode(prompt)
x = torch.tensor([ids], device=DEVICE)
for _ in range(max_new_tokens):
with torch.no_grad():
logits = model(x).logits[0, -1] / max(temperature, 1e-6)
probs = torch.softmax(logits, dim=-1)
next_id = torch.multinomial(probs, 1).item()
x = torch.cat([x, torch.tensor([[next_id]], device=DEVICE)], dim=1)
return tok.decode(x[0].tolist()).replace("<|endoftext|>", "").strip()
```
---
## 🌐 Serving with vLLM (Optional)
```bash
python -m vllm.entrypoints.openai.api_server \
--model kunjcr2/GatorGPT2 \
--tokenizer kunjcr2/GatorGPT2 \
--trust-remote-code \
--dtype float32 \
--max-model-len 1024 \
--host 0.0.0.0 --port 8000
```
Call it:
```bash
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{"model":"kunjcr2/GatorGPT2","prompt":"Little girl was","max_tokens":64,"temperature":0.9}'
```
---
## 🧪 Training Summary
* **Data**: `roneneldan/TinyStories` (train split; subset of ~1.5M stories)
* **Objective**: causal LM (next-token prediction), cross-entropy
* **Optimizer**: AdamW (`lr=3e-4`, `weight_decay=0.01`, `eps=1e-8`)
* **Precision**: bf16 autocast on CUDA during forward for speed
* **Batching**: sliding windows via a `FastDataset` (window size e.g. 512, stride 256); see the sketch below
* **Eval**: periodic validation over fixed batches; train loss downsampled to eval steps for plotting
* **Hardware**: intended for A100-class GPUs; also runs on CPU for debug (slow)
> This is a *from-scratch* toy/educational model; quality depends heavily on training steps, data cleaning, and the learning-rate schedule. Expect simple, short English generations.
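The `FastDataset` referenced above is not part of this repo; the following is a minimal sketch of sliding-window next-token batching under the stated window/stride, not the actual training code:
```python
import torch
from torch.utils.data import Dataset

class SlidingWindowDataset(Dataset):
    """Sketch of sliding-window next-token batching (window=512, stride=256)."""
    def __init__(self, token_ids, window=512, stride=256):
        self.examples = []
        # Each window of `window + 1` tokens yields (inputs, shifted targets).
        for start in range(0, len(token_ids) - window - 1, stride):
            chunk = token_ids[start : start + window + 1]
            x = torch.tensor(chunk[:-1], dtype=torch.long)
            y = torch.tensor(chunk[1:], dtype=torch.long)
            self.examples.append((x, y))

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]
```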
---
## ✅ Intended Use
* Research on small decoder-only Transformers
* Educational demos (training, saving, model hub, vLLM serving)
* Baseline for experimenting with:
* LoRA/QLoRA, quantization, distillation
* Attention variants (Flash-Attention, GQA configs)
* Data curation and scaling laws
**Not** intended for production or safety-critical use.
---
## ⚠️ Limitations & Risks
* Trained on children’s story data ⇒ limited world knowledge & reasoning
* May output incoherent, repetitive, or undesirable text
* No instruction-tuning or RLHF
* Tokenizer is `tiktoken p50k_base` (not a standard HF tokenizer), so examples use `tiktoken` directly
---
## 📁 Repo Structure
```
.
├── config.json
├── pytorch_model.bin # or model.safetensors
├── modeling_gator.py # custom architecture (RoPE, GQA, RMSNorm, SwiGLU)
├── configuration_gator.py
├── __init__.py
└── tokenizer_manifest.json # { "library": "tiktoken", "encoding": "p50k_base" }
```
`config.json` includes:
```json
{
"model_type": "gator-transformer",
"architectures": ["GatorModel"],
"auto_map": {
"AutoConfig": "configuration_gator.GatorConfig",
"AutoModelForCausalLM": "modeling_gator.GatorModel"
}
}
```
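Since the tokenizer ships as the `tokenizer_manifest.json` listed above rather than a standard HF tokenizer, it can be resolved programmatically. A minimal sketch (the download/parse flow is an assumption; only the manifest format shown above comes from the repo):
```python
import json

import tiktoken
from huggingface_hub import hf_hub_download

# Fetch the manifest and pick the tiktoken encoding it declares ("p50k_base").
manifest_path = hf_hub_download(repo_id="kunjcr2/GatorGPT2", filename="tokenizer_manifest.json")
with open(manifest_path) as f:
    manifest = json.load(f)

tok = tiktoken.get_encoding(manifest["encoding"])
print(tok.decode(tok.encode("Little girl was")))
```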
---
## 📊 Evaluation
No formal benchmarks reported. You can compute loss/perplexity on your own validation subset:
```python
import math, torch
from torch.utils.data import DataLoader, TensorDataset
# ...build a DataLoader of (input_ids, target_ids) pairs...
def eval_loss(model, loader, device="cuda"):
model.eval(); total, n = 0.0, 0
with torch.no_grad():
for x, y in loader:
x, y = x.to(device), y.to(device)
logits = model(x).logits
loss = torch.nn.functional.cross_entropy(
logits.view(-1, logits.size(-1)), y.view(-1)
)
total += loss.item(); n += 1
return total / max(n,1)
val_loss = eval_loss(model, your_val_loader)
print("val loss:", val_loss, " ppl:", math.exp(val_loss))
```
---
## 📜 License
**apache-2.0**
---
## 🙌 Acknowledgements
* **TinyStories** dataset by Ronen Eldan et al. (`roneneldan/TinyStories`)
* Community tooling: **PyTorch**, **🤗 Transformers**, **tiktoken**, **vLLM**
---
## ✉️ Citation
If you use this model, please cite this repository:
```bibtex
@software{GatorGPT2_2025,
author = {Kunj},
title = {GatorGPT2: a small decoder-only Transformer with RoPE+GQA},
year = {2025},
url = {https://huggingface.co/kunjcr2/GatorGPT2}
}
```
|
Tato-21/RL_Unit1
|
Tato-21
| 2025-09-02T03:54:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-02T03:54:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 289.09 +/- 11.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is a placeholder; check the repo's files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub(repo_id="Tato-21/RL_Unit1", filename="ppo-LunarLander-v2.zip")  # placeholder filename
model = PPO.load(checkpoint)
```
|
Soughing/gla_xxl
|
Soughing
| 2025-09-02T03:52:37Z | 0 | 0 | null |
[
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T07:15:56Z |
---
license: apache-2.0
---
|
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1756785033
|
fakir22
| 2025-09-02T03:51:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping peaceful caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:51:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping peaceful caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Saad2226/blockassist-bc-pudgy_sly_badger_1756784429
|
Saad2226
| 2025-09-02T03:50:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pudgy sly badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:50:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy sly badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tiopuiter/blockassist-bc-prickly_hulking_sandpiper_1756784814
|
tiopuiter
| 2025-09-02T03:47:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prickly hulking sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:46:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prickly hulking sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756784774
|
omerbektass
| 2025-09-02T03:47:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:46:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1756784669
|
fakir22
| 2025-09-02T03:45:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping peaceful caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:45:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping peaceful caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nick1880/blockassist-bc-barky_powerful_falcon_1756784443
|
nick1880
| 2025-09-02T03:41:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky powerful falcon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:41:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky powerful falcon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756784395
|
omerbektass
| 2025-09-02T03:40:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:40:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1756782850
|
chainway9
| 2025-09-02T03:40:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:40:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/DeepSeek-GRM-16B-GGUF
|
mradermacher
| 2025-09-02T03:39:50Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"zh",
"en",
"dataset:openbmb/UltraFeedback",
"dataset:NCSOFT/offsetbias",
"dataset:Skywork/Skywork-Reward-Preference-80K-v0.2",
"dataset:nvidia/HelpSteer2",
"base_model:BBQGOD/DeepSeek-GRM-16B",
"base_model:quantized:BBQGOD/DeepSeek-GRM-16B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-02T00:03:56Z |
---
base_model: BBQGOD/DeepSeek-GRM-16B
datasets:
- openbmb/UltraFeedback
- NCSOFT/offsetbias
- Skywork/Skywork-Reward-Preference-80K-v0.2
- nvidia/HelpSteer2
language:
- zh
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: deepseek
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/BBQGOD/DeepSeek-GRM-16B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepSeek-GRM-16B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DeepSeek-GRM-16B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
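As one concrete option, here is a minimal llama-cpp-python sketch (the runtime choice and parameters are assumptions; the filename matches the Q4_K_S entry in the table below):
```python
from llama_cpp import Llama

# Downloads the selected quant from the Hub and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/DeepSeek-GRM-16B-GGUF",
    filename="DeepSeek-GRM-16B.Q4_K_S.gguf",
    n_ctx=2048,
)
out = llm("Write one sentence about reward models:", max_tokens=64)
print(out["choices"][0]["text"])
```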
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q2_K.gguf) | Q2_K | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q3_K_S.gguf) | Q3_K_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q3_K_M.gguf) | Q3_K_M | 8.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q3_K_L.gguf) | Q3_K_L | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.IQ4_XS.gguf) | IQ4_XS | 8.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q4_K_S.gguf) | Q4_K_S | 9.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q4_K_M.gguf) | Q4_K_M | 10.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q5_K_S.gguf) | Q5_K_S | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q5_K_M.gguf) | Q5_K_M | 12.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q6_K.gguf) | Q6_K | 14.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q8_0.gguf) | Q8_0 | 16.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
david3621/blockassist-bc-gentle_meek_cat_1756782911
|
david3621
| 2025-09-02T03:37:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle meek cat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:30:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle meek cat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756784107
|
akirafudo
| 2025-09-02T03:36:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:35:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
underscore2/llama3-8b-bluesky-engagement-kto
|
underscore2
| 2025-09-02T03:35:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-07-25T21:34:36Z |
---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** underscore2
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ChenWu98/numina_qwen_2.5_sft_combine_v1_source_split_0
|
ChenWu98
| 2025-09-02T03:34:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T03:33:04Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_combine_v1_source_split_0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_sft_combine_v1_source_split_0
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_combine_v1_source_split_0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/pyqm8q99)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756783970
|
omerbektass
| 2025-09-02T03:33:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:33:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756781425
|
acidjp
| 2025-09-02T03:28:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:28:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amandacute/blockassist-bc-amphibious_plump_ram_1756783646
|
amandacute
| 2025-09-02T03:28:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious plump ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:27:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|