modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-11 00:42:47) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 553 classes) | tags (list, length 1 – 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-11 00:42:38) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
burgbobby/blockassist-bc-lithe_wild_boar_1757449510
|
burgbobby
| 2025-09-09T20:25:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lithe wild boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:25:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lithe wild boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
negersdrahimi/blockassist-bc-dense_squeaky_iguana_1757449112
|
negersdrahimi
| 2025-09-09T20:18:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dense squeaky iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:18:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dense squeaky iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gouki510/gemma2-27b-base-correct-legal
|
gouki510
| 2025-09-09T20:18:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-2-27b",
"base_model:finetune:unsloth/gemma-2-27b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T19:47:11Z |
---
base_model: unsloth/gemma-2-27b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** gouki510
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-27b
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
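A minimal inference sketch (not from the original card; it assumes standard 🤗 Transformers loading and enough GPU memory for a 27B model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gouki510/gemma2-27b-base-correct-legal"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map requires accelerate
)

inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```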
|
arabellamorris/blockassist-bc-tricky_sneaky_locust_1757448988
|
arabellamorris
| 2025-09-09T20:16:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tricky sneaky locust",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:16:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tricky sneaky locust
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sadiyakhatun65524/blockassist-bc-insectivorous_prehistoric_mouse_1757448957
|
sadiyakhatun65524
| 2025-09-09T20:16:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous prehistoric mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:16:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous prehistoric mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
anaruio/mms-azb-discriminator
|
anaruio
| 2025-09-09T20:07:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T20:07:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Viktor-01/blockassist-bc-leaping_humming_finch_1757445655
|
Viktor-01
| 2025-09-09T20:04:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leaping humming finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:04:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leaping humming finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EnriqueSolarte/qwen2.5-VL-7B-instruct-00004-VqCaAuuoeWk_0
|
EnriqueSolarte
| 2025-09-09T20:02:22Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B",
"base_model:finetune:UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-05T07:21:28Z |
---
base_model: UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B
library_name: transformers
model_name: qwen2.5-VL-7B-instruct-00004-VqCaAuuoeWk_0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen2.5-VL-7B-instruct-00004-VqCaAuuoeWk_0
This model is a fine-tuned version of [UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B](https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="EnriqueSolarte/qwen2.5-VL-7B-instruct-00004-VqCaAuuoeWk_0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hoggcatharine/blockassist-bc-sleek_shy_moose_1757447916
|
hoggcatharine
| 2025-09-09T19:58:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sleek shy moose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:58:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sleek shy moose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
domagallgino/blockassist-bc-foxy_cunning_fly_1757447408
|
domagallgino
| 2025-09-09T19:50:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foxy cunning fly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:50:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foxy cunning fly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1757442332
|
omerbektass
| 2025-09-09T18:26:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T18:26:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kosenhans/blockassist-bc-regal_sharp_rat_1757442313
|
kosenhans
| 2025-09-09T18:25:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal sharp rat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T18:25:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal sharp rat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jeresftarke/blockassist-bc-flapping_beaked_owl_1757442250
|
jeresftarke
| 2025-09-09T18:24:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping beaked owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T18:24:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping beaked owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1757440430
|
capungmerah627
| 2025-09-09T18:20:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T18:20:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
najmanipa6/blockassist-bc-small_invisible_ant_1757442029
|
najmanipa6
| 2025-09-09T18:20:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"small invisible ant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T18:20:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- small invisible ant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sesamnsoipsdesnsoip/blockassist-bc-beaked_solitary_stork_1757442009
|
sesamnsoipsdesnsoip
| 2025-09-09T18:20:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked solitary stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T18:20:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked solitary stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
choiqs/diffusion-row-10-30-col-10-30-mar0.3-marblock0.3-marbandit0.4-epoch100-bs32-samples30k-lr1e-3
|
choiqs
| 2025-09-09T18:20:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-09T18:20:07Z |
# MAR Diffusion Model - Missingness Pattern Generation
## Model Configuration
- **Matrix Size Range**: 10-30 rows × 10-30 columns
- **Missingness Types**: bandit (40%), mar (30%), block_mar (30%)
- **Training**: 100 epochs, batch size 32, 30k samples/epoch
- **Learning Rate**: 1e-3
- **Architecture**: Fully Convolutional X0 Model with D3PM
## Files
- `model_final.pth`: Trained PyTorch model weights
- `model_final_config.json`: Complete training configuration
- `class_mapping.json`: Missingness type to class ID mapping
- `model_final_config.txt`: Human-readable config summary
## Usage
This model generates binary missingness patterns for tabular data with controlled MAR (Missing At Random) structure.
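A minimal loading sketch (not from the original card; it assumes only the files listed above, since the network class itself lives in the training code):
```python
import json
import torch

# Read the training configuration and the missingness-type -> class-ID map
# that ship with the weights (file names from the Files section above).
with open("model_final_config.json") as f:
    config = json.load(f)
with open("class_mapping.json") as f:
    class_mapping = json.load(f)

# model_final.pth holds raw weights; instantiating the D3PM model requires
# the corresponding class from the training code, so we stop at the state dict.
state_dict = torch.load("model_final.pth", map_location="cpu")
print(f"{len(state_dict)} weight tensors; missingness classes: {class_mapping}")
```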
## Training Details
- **Total Steps**: 93,700
- **Model Parameters**: 17,759,888
- **Diffusion Steps**: 1,000
- **Hybrid Loss Coefficient**: 0.0 (pure cross-entropy loss)
|
seams01/blockassist-bc-insectivorous_stubby_snake_1757440244
|
seams01
| 2025-09-09T18:20:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T18:20:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
oksanany/gptoss-stage-1-ds2
|
oksanany
| 2025-09-09T18:14:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T18:14:07Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** oksanany
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
f9997413/blockassist-bc-snorting_arctic_flamingo_1757441423
|
f9997413
| 2025-09-09T18:12:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting arctic flamingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T18:11:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting arctic flamingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aronlg/blockassist-bc-wiry_insectivorous_bat_1757441171
|
aronlg
| 2025-09-09T18:07:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry insectivorous bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T18:07:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry insectivorous bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Gemma3-270M-NPCs-GGUF
|
mradermacher
| 2025-09-09T18:06:21Z | 1,150 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:Campis/Gemma3-270M-NPCs",
"base_model:quantized:Campis/Gemma3-270M-NPCs",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-07T00:16:13Z |
---
base_model: Campis/Gemma3-270M-NPCs
language:
- en
library_name: transformers
model_name: Gemma3-270M-NPCs
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Campis/Gemma3-270M-NPCs
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page](https://hf.tst.eu/model#Gemma3-270M-NPCs-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
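As a concrete, hedged sketch (not part of the upstream instructions), recent llama.cpp builds can stream a quant straight from this repo, e.g. the Q4_K_M file from the table below:
```bash
# Hedged example: run the recommended Q4_K_M quant directly from the Hub
# (requires a llama.cpp build with Hugging Face download support).
llama-cli --hf-repo mradermacher/Gemma3-270M-NPCs-GGUF \
  --hf-file Gemma3-270M-NPCs.Q4_K_M.gguf \
  -p "Hello"
```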
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-270M-NPCs-GGUF/resolve/main/Gemma3-270M-NPCs.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MedResearcher-R1-32B-i1-GGUF
|
mradermacher
| 2025-09-09T18:04:41Z | 3,227 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:AQ-MedAI/MedResearcher-R1-32B",
"base_model:quantized:AQ-MedAI/MedResearcher-R1-32B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-07T14:10:16Z |
---
base_model: AQ-MedAI/MedResearcher-R1-32B
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/AQ-MedAI/MedResearcher-R1-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page](https://hf.tst.eu/model#MedResearcher-R1-32B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF/resolve/main/MedResearcher-R1-32B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
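To fetch a single file, a hedged sketch using `huggingface-cli` (from the `huggingface_hub` package; the file name is taken from the table above):
```bash
# Download the i1-Q4_K_S quant ("optimal size/speed/quality" above) locally.
huggingface-cli download mradermacher/MedResearcher-R1-32B-i1-GGUF \
  MedResearcher-R1-32B.i1-Q4_K_S.gguf --local-dir .
```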
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MorsiKK/phi-1-Q4_K_M-GGUF
|
MorsiKK
| 2025-09-09T18:00:27Z | 0 | 0 | null |
[
"gguf",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/phi-1",
"base_model:quantized:microsoft/phi-1",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T18:00:20Z |
---
license: mit
license_link: https://huggingface.co/microsoft/phi-1/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- code
- llama-cpp
- gguf-my-repo
base_model: microsoft/phi-1
---
# MorsiKK/phi-1-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/phi-1`](https://huggingface.co/microsoft/phi-1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/phi-1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MorsiKK/phi-1-Q4_K_M-GGUF --hf-file phi-1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MorsiKK/phi-1-Q4_K_M-GGUF --hf-file phi-1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MorsiKK/phi-1-Q4_K_M-GGUF --hf-file phi-1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MorsiKK/phi-1-Q4_K_M-GGUF --hf-file phi-1-q4_k_m.gguf -c 2048
```
|
mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF
|
mradermacher
| 2025-09-09T17:59:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-09T17:59:03Z |
---
base_model: yujunzhou/AIME-TTT-OctoThinker-3B-Short-Base-TTRL
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/yujunzhou/AIME-TTT-OctoThinker-3B-Short-Base-TTRL
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page](https://hf.tst.eu/model#AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.Q2_K.gguf) | Q2_K | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.Q3_K_L.gguf) | Q3_K_L | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.Q4_K_S.gguf) | Q4_K_S | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.Q4_K_M.gguf) | Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.Q5_K_S.gguf) | Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.Q5_K_M.gguf) | Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.Q6_K.gguf) | Q6_K | 3.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.Q8_0.gguf) | Q8_0 | 3.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AIME-TTT-OctoThinker-3B-Short-Base-TTRL-GGUF/resolve/main/AIME-TTT-OctoThinker-3B-Short-Base-TTRL.f16.gguf) | f16 | 7.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sevenditaifur/blockassist-bc-screeching_gentle_dinosaur_1757440427
|
sevenditaifur
| 2025-09-09T17:53:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"screeching gentle dinosaur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:53:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- screeching gentle dinosaur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gcvvlima/blockassist-bc-scruffy_sizable_squirrel_1757440294
|
gcvvlima
| 2025-09-09T17:51:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy sizable squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:51:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy sizable squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Gillioxl/Model1
|
Gillioxl
| 2025-09-09T17:50:28Z | 0 | 0 | null |
[
"safetensors",
"gguf",
"llama",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-09T17:20:05Z |
---
license: apache-2.0
language:
- en
tags:
- unsloth
---
|
TheRealSoham/pegasuslarge-CNN_Daily_Mail
|
TheRealSoham
| 2025-09-09T17:49:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T07:58:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Pilot-3B-i1-GGUF
|
mradermacher
| 2025-09-09T17:47:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:songff/GenerAlign",
"base_model:songff/Pilot-3B",
"base_model:quantized:songff/Pilot-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-09T15:46:04Z |
---
base_model: songff/Pilot-3B
datasets:
- songff/GenerAlign
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/songff/Pilot-3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page](https://hf.tst.eu/model#Pilot-3B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Pilot-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF/resolve/main/Pilot-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
aronlg/blockassist-bc-wiry_insectivorous_bat_1757439935
|
aronlg
| 2025-09-09T17:46:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry insectivorous bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:46:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry insectivorous bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
the-acorn-ai/qwen3-8b-self-play-new-step00384
|
the-acorn-ai
| 2025-09-09T17:42:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"spiral",
"self-play",
"reinforcement-learning",
"multi-agent",
"conversational",
"en",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T17:42:09Z |
---
base_model: Qwen/Qwen3-8B-Base
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- spiral
- self-play
- reinforcement-learning
- qwen3
- multi-agent
---
# SPIRAL Qwen3-8B Multi-Agent Model
This model was trained using the SPIRAL (Self-Play Iterative Reinforcement Learning for Adaptation and Learning) framework.
## Model Details
- **Base Model**: Qwen/Qwen3-8B-Base
- **Training Framework**: SPIRAL
- **Checkpoint**: step_00384
- **Model Size**: 8B parameters
- **Training Date**: 2025-09-09
## Training Configuration
The model was trained with self-play on multiple environments:
- KuhnPoker-v1
- TicTacToe-v0
- SimpleNegotiation-v1
### Training Parameters
```json
{
"learning_rate": "1e-6",
"train_batch_size": 128,
"num_ppo_epochs": 2,
"temperature": 1.0,
"max_model_len": 16384,
"environments": [
"KuhnPoker-v1",
"TicTacToe-v0",
"SimpleNegotiation-v1"
],
"base_model": "Qwen/Qwen3-8B-Base",
"framework": "SPIRAL"
}
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/qwen3-8b-self-play-new-step00384")
model = AutoModelForCausalLM.from_pretrained(
"the-acorn-ai/qwen3-8b-self-play-new-step00384",
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## License
This model is licensed under the Apache License 2.0.
|
poeryouy/blockassist-bc-skittish_beaked_duck_1757439413
|
poeryouy
| 2025-09-09T17:37:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skittish beaked duck",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:36:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skittish beaked duck
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mauro6519/gemma-3N-E4B-finetune
|
Mauro6519
| 2025-09-09T17:34:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-09T17:27:53Z |
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Mauro6519
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
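A hedged usage sketch (not from the original card; it assumes a transformers version with gemma3n support and the image-text-to-text pipeline):
```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Mauro6519/gemma-3N-E4B-finetune")
messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder URL
    {"type": "text", "text": "Describe this image."},
]}]
print(pipe(text=messages, max_new_tokens=64)[0]["generated_text"])
```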
|
niceelliot/blockassist-bc-muscular_slow_donkey_1757439137
|
niceelliot
| 2025-09-09T17:33:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular slow donkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:33:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular slow donkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
poeryouy/blockassist-bc-hoarse_armored_emu_1757439040
|
poeryouy
| 2025-09-09T17:31:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hoarse armored emu",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:30:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hoarse armored emu
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1757437288
|
sampingkaca72
| 2025-09-09T17:30:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:30:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
poeryouy/blockassist-bc-silent_sly_rabbit_1757438977
|
poeryouy
| 2025-09-09T17:30:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent sly rabbit",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:29:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent sly rabbit
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
navsaab/blockassist
|
navsaab
| 2025-09-09T17:27:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"burrowing noisy chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:27:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing noisy chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1757437227
|
vwzyrraz7l
| 2025-09-09T17:24:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:24:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KamelWerhani/Phi-4-mini-reasoning-ROS
|
KamelWerhani
| 2025-09-09T17:23:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T17:18:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
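The template leaves this section as [More Information Needed]. As a purely illustrative placeholder (not from the model authors), the sketch below assumes the checkpoint loads as a standard causal LM; the `custom_code` tag in this entry suggests `trust_remote_code=True` is required, and the prompt and generation settings are made-up examples.
```python
# Hypothetical quick-start sketch; repo id taken from this entry, all settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "KamelWerhani/Phi-4-mini-reasoning-ROS"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("Explain what a ROS node is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```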
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rolandxhafajd035/blockassist-bc-masked_hulking_emu_1757438512
|
rolandxhafajd035
| 2025-09-09T17:22:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked hulking emu",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:21:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked hulking emu
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lc4299260/blockassist-bc-powerful_scurrying_chameleon_1757438402
|
lc4299260
| 2025-09-09T17:20:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"powerful scurrying chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:20:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- powerful scurrying chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yoiuport/blockassist-bc-majestic_mammalian_tortoise_1757438264
|
yoiuport
| 2025-09-09T17:18:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"majestic mammalian tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:17:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- majestic mammalian tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dipalamia548/blockassist-bc-invisible_foxy_parrot_1757438131
|
dipalamia548
| 2025-09-09T17:15:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"invisible foxy parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:15:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- invisible foxy parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
WenFengg/ExpertWed10_wen14_number20
|
WenFengg
| 2025-09-09T17:14:25Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-09T17:13:44Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
gtallec-kog/Llama-3.2-1B-pruned-on-4.0
|
gtallec-kog
| 2025-09-09T17:13:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T17:13:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
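The template leaves this section as [More Information Needed]. A minimal, hedged sketch using the standard `pipeline` API follows; the repo id is taken from this entry and the prompt is a placeholder.
```python
# Illustrative only; prompt and generation settings are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gtallec-kog/Llama-3.2-1B-pruned-on-4.0")
print(generator("The capital of France is", max_new_tokens=32)[0]["generated_text"])
```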
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1757437620
|
fakir22
| 2025-09-09T17:07:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping peaceful caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:07:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping peaceful caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dbjfbdvfhjfdb/blockassist-bc-smooth_timid_squid_1757437278
|
dbjfbdvfhjfdb
| 2025-09-09T17:01:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth timid squid",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:01:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth timid squid
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Retreatcost/KansenSakura-Zero-RP-12b
|
Retreatcost
| 2025-09-09T16:59:56Z | 19 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"frankenmerge",
"roleplay",
"conversational",
"nsfw",
"base_model:Epiculous/Crimson_Dawn-v0.2",
"base_model:merge:Epiculous/Crimson_Dawn-v0.2",
"base_model:LatitudeGames/Muse-12B",
"base_model:merge:LatitudeGames/Muse-12B",
"base_model:LatitudeGames/Wayfarer-12B",
"base_model:merge:LatitudeGames/Wayfarer-12B",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.3.0-12b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.3.0-12b",
"base_model:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:merge:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:PygmalionAI/Eleusis-12B",
"base_model:merge:PygmalionAI/Eleusis-12B",
"base_model:ReadyArt/Forgotten-Abomination-12B-v4.0",
"base_model:merge:ReadyArt/Forgotten-Abomination-12B-v4.0",
"base_model:elinas/Chronos-Gold-12B-1.0",
"base_model:merge:elinas/Chronos-Gold-12B-1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-24T19:44:52Z |
---
base_model:
- PocketDoc/Dans-PersonalityEngine-V1.3.0-12b
- PocketDoc/Dans-SakuraKaze-V1.0.0-12b
- elinas/Chronos-Gold-12B-1.0
- PygmalionAI/Eleusis-12B
- ReadyArt/Forgotten-Abomination-12B-v4.0
- Epiculous/Crimson_Dawn-v0.2
- LatitudeGames/Wayfarer-12B
- LatitudeGames/Muse-12B
library_name: transformers
tags:
- mergekit
- merge
- frankenmerge
- roleplay
- conversational
- nsfw
new_version: Retreatcost/KansenSakura-Eclipse-RP-12b
license: apache-2.0
---
# KansenSakura-Zero-RP-12b
<pre>
Rusted petals fall
On circuits that dream of blood
<span style="color: red;">Error 0x1FABE5: Beauty not found</span>
This is not a bug
It's the feature they warned of
Reboot into spring
</pre>
## 🌸 Techno-Organic Roleplay Engine
> When the first sakura petal touched the machine, Patient Zero awoke. This narrative engine transforms stories into living infections - where every character preserves their core essence while undergoing beautiful corruption. Will your tale contain the outbreak... or become its vector?
## 🔍 Overview
**KansenSakura-Zero-RP-12b** is a roleplaying specialist model engineered for immersive narrative experiences blending Japanese visual novel aesthetics with techno-organic horror. Designed as the "Patient Zero" of narrative infection engines, it transforms characters while preserving their core essence - whether organic or mechanical.
## ℹ️ Model Details
- 🧬 **Core Infection**: Cherry blossom motif meets nanite corruption
- ⚙️ **Architecture**: 12B parameter layer-merged transformer
- 🧪 **Creation Method**: Precision layer merging (8-model synthesis)
- 🎭 **Specialization**: Character-driven narratives with emergent corruption themes
- 🔖 **Version**: Zero (Initial Outbreak)
## 🎮 Intended Use
- 🤖 Character-driven narratives with transformation arcs
- 🎴 Visual novel / Doujin-style storytelling
- ☠️ Apocalyptic and cyber-horror scenarios
- 💞 Emotional corruption/redemption narratives
## 😷 Ethical Quarantine
This model contains:
- ⚠️ Unfiltered creative output
- ⚠️ Potential for disturbing narratives
- ⚠️ NSFW-capable layers
## ✍🏻 Inference Tips
1. **Temperature**: 0.8
2. **Repetition Penalty**: 1.05
3. **TOP_P**: 0.97
4. **TOP_K**: 0 (disable)
5. **MIN_P**: 0.025
6. **Template Format**: ChatML
7. **Max Output**: 320
8. **Context Management**: 16K for best quality; expect slight degradation beyond that (a hedged code example applying these settings follows)
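A minimal sketch applying these sampler settings with 🤗 transformers (not from the model card; assumes a recent transformers version with `min_p` support and enough VRAM for a 12B model in bf16):
```python
# Hedged example; the prompt is a placeholder and hardware settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Retreatcost/KansenSakura-Zero-RP-12b"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="bfloat16", device_map="auto")

messages = [{"role": "user", "content": "Describe a rusting cherry blossom."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(
    inputs,
    do_sample=True,
    temperature=0.8,        # recommended settings from the list above
    repetition_penalty=1.05,
    top_p=0.97,
    top_k=0,                # 0 disables top-k filtering in transformers
    min_p=0.025,
    max_new_tokens=320,
)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```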
## 🧩 Model Composition
A precision surgical merge of specialized models:
| Layer Range | Model | Contribution |
|------------|-------|-------------|
| **0-5** | `Dans-PersonalityEngine-V1.3.0` | Personality anchoring |
| **5-14** | `Dans-SakuraKaze-V1.0.0` | Narrative coherence |
| **14-22** | `Chronos-Gold-12B` + `Eleusis-12B` | World knowledge & emotional intelligence |
| **22-29** | `Forgotten-Abomination-12B-v4.0` + `Crimson_Dawn-V0.2` | RP memory & corruption mechanics |
| **29-35** | `Wayfarer-12B` | Scene crafting |
| **35-39** | `Muse-12B` | Immersive delivery |
| **39-40** | `Dans-SakuraKaze-V1.0.0` | Output coherence |
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [PocketDoc/Dans-PersonalityEngine-V1.3.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-12b)
* [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b)
* [elinas/Chronos-Gold-12B-1.0](https://huggingface.co/elinas/Chronos-Gold-12B-1.0)
* [PygmalionAI/Eleusis-12B](https://huggingface.co/PygmalionAI/Eleusis-12B)
* [ReadyArt/Forgotten-Abomination-12B-v4.0](https://huggingface.co/ReadyArt/Forgotten-Abomination-12B-v4.0)
* [Epiculous/Crimson_Dawn-v0.2](https://huggingface.co/Epiculous/Crimson_Dawn-v0.2)
* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
* [LatitudeGames/Muse-12B](https://huggingface.co/LatitudeGames/Muse-12B)
### Reproduction steps
<details>
<summary>Spoiler warning</summary>
1. Retokenize `ReadyArt/Forgotten-Abomination-12B-v4.0` using [mergekit-tokensurgeon](https://github.com/arcee-ai/mergekit/blob/main/docs/tokensurgeon.md)
```bash
mergekit-tokensurgeon "ReadyArt/Forgotten-Abomination-12B-v4.0" "Epiculous/Crimson_Dawn-v0.2" ./retokenized_FA --approximation-method omp --k 256
```
Note: After experimenting, I discovered that `PocketDoc/Dans-PersonalityEngine-V1.3.0-12b` works with the ChatML tokenizer without explicit retokenization, but produces much more text than desired. As its position is in the starting layers, this may be a desired, more unhinged behaviour, so we retokenize only `ReadyArt/Forgotten-Abomination-12B-v4.0` to use ChatML. Since we will merge it with `Epiculous/Crimson_Dawn-v0.2`, it is natural to use that model as the donor.
Note: According to the following [paper](https://arxiv.org/html/2506.06607v1), `omp --k 64` is sufficient and higher values yield diminishing returns, but I decided to max out the quality anyway.
2. Merge models using mergekit [mergekit-multi](https://github.com/arcee-ai/mergekit/blob/main/docs/multimerge.md)
```yml
name: knowledge_core
merge_method: nuslerp
models:
- model: elinas/Chronos-Gold-12B-1.0
parameters:
weight: 0.4
- model: PygmalionAI/Eleusis-12B
parameters:
weight: 0.6
---
name: rp_blend
merge_method: nuslerp
models:
- model: ./retokenized_FA
parameters:
weight: 0.6
- model: Epiculous/Crimson_Dawn-v0.2
parameters:
weight: 0.4
---
merge_method: passthrough
slices:
- sources: # Personality Foundation
- model: PocketDoc/Dans-PersonalityEngine-V1.3.0-12b
layer_range: [0, 5]
- sources: # Base Model
- model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
layer_range: [5, 14]
- sources: # Worldbuilding focus
- model: knowledge_core
layer_range: [14, 22]
- sources: # Emotional intensity
- model: rp_blend
layer_range: [22, 29]
- sources: # Danger Specialization
- model: LatitudeGames/Wayfarer-12B
layer_range: [29, 35]
- sources: # Delivery & Alignment
- model: LatitudeGames/Muse-12B
layer_range: [35, 39]
- sources: # Output Layer
- model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
layer_range: [39, 40]
dtype: bfloat16
```
```bash
mergekit-multi sakuramerge.yml --intermediate-dir ./intermediates --out-path ./KansenSakura-Zero-RP-12b
```
Note: According to this [paper](https://arxiv.org/html/2409.14381v1), the top 3 layers provide up to 30% of model performance. According to this [paper](https://arxiv.org/html/2404.07066v7), more complex concepts emerge in later layers. According to this [paper](https://arxiv.org/html/2410.17875v3), model alignment and data presentation are most affected by the last (bottom) layers.
Based on this knowledge, I placed each model where it would benefit the merge the most.
3. Optional - create Q8_0 GGUF using llama.cpp
- use convert_hf_to_gguf.py script from llama.cpp (here's [source](https://github.com/ggml-org/llama.cpp/blob/master/convert_hf_to_gguf.py))
```bash
python convert_hf_to_gguf.py ./KansenSakura-Zero-RP-12b --outtype q8_0
```
</details>
## 🙏 Acknowledgments
We stand on the shoulders of giants:
- [PocketDoc](https://huggingface.co/PocketDoc) for PersonalityEngine and SakuraKaze foundations
- [Latitude](https://huggingface.co/LatitudeGames) team for narrative expertise
- [Elinas](https://huggingface.co/elinas) for temporal knowledge systems
- [PygmalionAI](https://huggingface.co/PygmalionAI) for emotional intelligence research
- [ReadyArt](https://huggingface.co/ReadyArt) for dark arts
- [Arcee AI](https://huggingface.co/arcee-ai) for making questionable AI combinations possible with [mergekit](https://github.com/arcee-ai/mergekit)
- [Featherless AI](https://featherless.ai) for kindly hosting the model
- [Team mradermacher](https://huggingface.co/mradermacher) for awesome quants
- **You**, dear user, for willingly exposing yourself to this digital infection vector. Patient Zero status granted! 🦠
*When the first circuit blooms... the infection begins*
### 📜 Narrative Hazard Disclaimer
> *KansenSakura-Zero-RP-12b is provided "as found in the corrupted data-core" without warranty of any kind. Users assume all responsibility for unintended character corruptions, emergent techno-organic fantasies, or sudden urges to describe rusting cherry blossoms. Not approved for medical diagnostics, financial advice, or anti-zombie defense systems. May contain traces of actual emotional intelligence. Side effects may include: phantom nanite tingling, involuntary haiku composition, or temporary possession by tragic android protagonists. If worldbuilding symptoms persist for more than 4 narrative hours, consult your nearest cyber-shaman. Remember: This isn't an infection - it's a feature.*
<del>*Disclaimer v1.0 - Valid until next bloom cycle* 🌸⚙️💀</del>
[New model available](https://huggingface.co/Retreatcost/KansenSakura-Eclipse-RP-12b)
|
heindelgadodjlemonddbu/blockassist-bc-cunning_untamed_cobra_1757436944
|
heindelgadodjlemonddbu
| 2025-09-09T16:57:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"cunning untamed cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:57:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- cunning untamed cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rootu/blockassist-bc-snorting_fleecy_goose_1757436726
|
Rootu
| 2025-09-09T16:52:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:52:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
makhiovrnl/blockassist-bc-marine_armored_weasel_1757435786
|
makhiovrnl
| 2025-09-09T16:36:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine armored weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:36:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine armored weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kafa22/blockassist-bc-regal_leggy_hummingbird_1757435733
|
kafa22
| 2025-09-09T16:36:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal leggy hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:36:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal leggy hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-humming_rugged_viper_1757433213
|
acidjp
| 2025-09-09T16:29:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"humming rugged viper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:29:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- humming rugged viper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
foutyui/blockassist-bc-humming_tricky_aardvark_1757435308
|
foutyui
| 2025-09-09T16:28:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"humming tricky aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:28:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- humming tricky aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757434799
|
Stasonelison
| 2025-09-09T16:20:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:20:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
currashawn/blockassist-bc-sturdy_alert_stork_1757434713
|
currashawn
| 2025-09-09T16:18:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy alert stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:18:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy alert stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bridgewaterargelia/blockassist-bc-padded_moist_locust_1757434540
|
bridgewaterargelia
| 2025-09-09T16:16:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"padded moist locust",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:16:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- padded moist locust
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rootu/blockassist-bc-snorting_fleecy_goose_1757434411
|
Rootu
| 2025-09-09T16:14:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:14:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dawnkelly09/preflight-smollm2-1.7b-lora
|
dawnkelly09
| 2025-09-09T16:08:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:dawnkelly09/preflight-sft",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T15:43:52Z |
---
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
datasets: dawnkelly09/preflight-sft
library_name: transformers
model_name: preflight-smollm2-1.7b-lora
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for preflight-smollm2-1.7b-lora
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) on the [dawnkelly09/preflight-sft](https://huggingface.co/datasets/dawnkelly09/preflight-sft) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dawnkelly09/preflight-smollm2-1.7b-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.56.1
- Pytorch: 2.4.0+cu121
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ChenWu98/qwen_2.5_0.5b_sft_type_anneal_condition_split_0_from_637
|
ChenWu98
| 2025-09-09T16:07:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:ChenWu98/qwen_2.5_0.5b_sft_type_condition",
"base_model:finetune:ChenWu98/qwen_2.5_0.5b_sft_type_condition",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T16:07:42Z |
---
base_model: ChenWu98/qwen_2.5_0.5b_sft_type_condition
library_name: transformers
model_name: qwen_2.5_0.5b_sft_type_anneal_condition_split_0_from_637
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen_2.5_0.5b_sft_type_anneal_condition_split_0_from_637
This model is a fine-tuned version of [ChenWu98/qwen_2.5_0.5b_sft_type_condition](https://huggingface.co/ChenWu98/qwen_2.5_0.5b_sft_type_condition).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/qwen_2.5_0.5b_sft_type_anneal_condition_split_0_from_637", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/fnpd1ev3)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
saraivaantoine/blockassist-bc-sleek_stinky_butterfly_1757434036
|
saraivaantoine
| 2025-09-09T16:07:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sleek stinky butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:07:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sleek stinky butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lukashossain3425/blockassist-bc-freckled_twitchy_wallaby_1757433838
|
lukashossain3425
| 2025-09-09T16:04:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"freckled twitchy wallaby",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:04:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- freckled twitchy wallaby
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pavankumar007/hack
|
pavankumar007
| 2025-09-09T16:02:58Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T16:02:58Z |
---
license: apache-2.0
---
|
elip3250/blockassist-bc-squinting_smooth_spider_1757433476
|
elip3250
| 2025-09-09T15:58:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squinting smooth spider",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:58:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squinting smooth spider
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tottenkhanqqmcguirendsy/blockassist-bc-lively_grunting_crane_1757433486
|
tottenkhanqqmcguirendsy
| 2025-09-09T15:58:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lively grunting crane",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:58:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lively grunting crane
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
slatinlatrina/blockassist-bc-mammalian_sneaky_prawn_1757432925
|
slatinlatrina
| 2025-09-09T15:48:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian sneaky prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:48:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian sneaky prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arthuryong/fine-tuned_deepseek
|
arthuryong
| 2025-09-09T15:46:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:finetune:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T06:59:51Z |
---
base_model: deepseek-ai/deepseek-coder-7b-instruct-v1.5
library_name: transformers
model_name: fine-tuned_deepseek
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for fine-tuned_deepseek
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-7b-instruct-v1.5](https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="arthuryong/fine-tuned_deepseek", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/arthuryong-personal/Fine%20tuning%20of%20Deepseek-coder-7b-instruct-v1.5/runs/ve883vkg)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kendzioracliff/blockassist-bc-dextrous_horned_chinchilla_1757432682
|
kendzioracliff
| 2025-09-09T15:45:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dextrous horned chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:44:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dextrous horned chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NotoriousH2/Qwen3-4B-Instruct-2507-Rude-LORA_Rude_LoRA
|
NotoriousH2
| 2025-09-09T15:44:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T15:44:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
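The template leaves this section as [More Information Needed]. Judging by the repo name, this appears to be a LoRA adapter for Qwen3-4B-Instruct-2507; the sketch below is an assumption-laden illustration using PEFT, and the base-model repo id is a guess.
```python
# Hypothetical sketch: assumes this repo is a PEFT LoRA adapter on the base model below.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "Qwen/Qwen3-4B-Instruct-2507"  # assumed base model
adapter = "NotoriousH2/Qwen3-4B-Instruct-2507-Rude-LORA_Rude_LoRA"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights
```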
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mullisonshirley/blockassist-bc-prehistoric_tropical_lemur_1757432138
|
mullisonshirley
| 2025-09-09T15:35:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prehistoric tropical lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:35:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prehistoric tropical lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
foridaparvin76474/blockassist-bc-skittish_vigilant_impala_1757431953
|
foridaparvin76474
| 2025-09-09T15:32:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skittish vigilant impala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:32:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skittish vigilant impala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
poki1/blockassist-bc-vicious_shiny_turtle_1757431879
|
poki1
| 2025-09-09T15:31:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious shiny turtle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:31:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious shiny turtle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
karthickhere/blockassist-bc-voracious_quiet_bear_1757431878
|
karthickhere
| 2025-09-09T15:31:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"voracious quiet bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:31:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- voracious quiet bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
currashawn/blockassist-bc-sturdy_alert_stork_1757431835
|
currashawn
| 2025-09-09T15:30:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy alert stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:30:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy alert stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pepijn223/pi05_droid_bf16
|
pepijn223
| 2025-09-09T15:27:19Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T15:26:56Z |
# PI0.5 Pi05 Droid (PyTorch, 16-bit floating point)
This is a PyTorch version of the PI0.5 pi05_droid model, converted from the original JAX/Flax implementation.
## Model Details
- **Architecture**: PI0.5 (Vision-Language-Action model with discrete state input)
- **Model Type**: PI0.5
- **Domain**: DROID (robotic manipulation)
- **Precision**: 16-bit floating point (bf16)
- **Action Dimension**: 32
- **Action Horizon**: 15
- **Max Token Length**: 200
- **Vision Model**: PaliGemma (gemma_2b)
- **Action Expert**: gemma_300m
## Key Features
- **Discrete State Input**: Uses discrete language tokens for state representation
- **Flow Matching**: Utilizes adaRMSNorm for timestep injection in action expert
- **Enhanced Action Modeling**: Improved action prediction with flow matching approach
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /fsx/pepijn/pi05_droid \
--config_name pi05_droid \
--output_path /fsx/pepijn/pi05_droid/pytorch/bf16/ \
--precision bfloat16
```
**Conversion Date**: 2025-09-09
## Usage
```python
from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
import torch
# Load the model
model = PI0Pytorch.from_pretrained("pepijn223/pi05_droid_bf16")
# The model expects inputs in the format:
# - images: torch.Tensor of shape [batch, height, width, channels]
# - text: tokenized text prompts
# - proprioceptive_state: robot state information (if applicable)
```
## Model Architecture
The model consists of:
1. **Vision Encoder**: PaliGemma-based vision processing
2. **Language Encoder**: Text prompt understanding
3. **Action Expert**: Specialized network for action prediction
4. **Integration Layer**: Combines multimodal information for action output
## Training Data
This model was trained on robotics datasets appropriate for its domain:
- **DROID models**: Trained on diverse robot manipulation data
- **ALOHA models**: Trained on bimanual manipulation tasks
- **LIBERO models**: Trained on diverse tabletop manipulation scenarios
- **Base models**: Trained on general robotics datasets
## Limitations
- Model performance depends on similarity between deployment and training environments
- May require domain-specific fine-tuning for optimal performance
- Action space must match the trained action dimension (32)
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757431330
|
cwayneconnor
| 2025-09-09T15:24:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:23:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pepijn223/pi0_libero_bf16
|
pepijn223
| 2025-09-09T15:23:43Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T15:23:30Z |
# PI0 Pi0 Libero (PyTorch, 16-bit floating point)
This is a PyTorch version of the PI0 pi0_libero model, converted from the original JAX/Flax implementation.
## Model Details
- **Architecture**: PI0 (Vision-Language-Action model)
- **Model Type**: PI0
- **Domain**: LIBERO (diverse manipulation tasks)
- **Precision**: 16-bit floating point (bf16)
- **Action Dimension**: 32
- **Action Horizon**: 50
- **Max Token Length**: 48
- **Vision Model**: PaliGemma (gemma_2b)
- **Action Expert**: gemma_300m
## Key Features
- **Vision-Language-Action**: Multimodal model combining vision, language, and action
- **PaliGemma Backbone**: Leverages PaliGemma for vision-language understanding
- **Continuous State Input**: Direct continuous state input processing
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /fsx/pepijn/pi0_libero \
--config_name pi0_libero \
--output_path /fsx/pepijn/pi0_libero/pytorch/bf16/ \
--precision bfloat16
```
**Conversion Date**: 2025-09-09
## Usage
```python
from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
import torch
# Load the model
model = PI0Pytorch.from_pretrained("pepijn223/pi0_libero_bf16")
# The model expects inputs in the format:
# - images: torch.Tensor of shape [batch, height, width, channels]
# - text: tokenized text prompts
# - proprioceptive_state: robot state information (if applicable)
```
## Model Architecture
The model consists of:
1. **Vision Encoder**: PaliGemma-based vision processing
2. **Language Encoder**: Text prompt understanding
3. **Action Expert**: Specialized network for action prediction
4. **Integration Layer**: Combines multimodal information for action output
## Training Data
This model was trained on robotics datasets appropriate for its domain:
- **DROID models**: Trained on diverse robot manipulation data
- **ALOHA models**: Trained on bimanual manipulation tasks
- **LIBERO models**: Trained on diverse tabletop manipulation scenarios
- **Base models**: Trained on general robotics datasets
## Limitations
- Model performance depends on similarity between deployment and training environments
- May require domain-specific fine-tuning for optimal performance
- Action space must match the trained action dimension (32)
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
marcelwisquiresvals/blockassist-bc-lumbering_singing_bison_1757431403
|
marcelwisquiresvals
| 2025-09-09T15:23:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering singing bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:23:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering singing bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sztyber/Qwen2.5-14B-Instruct_bird108_r64_6e
|
sztyber
| 2025-09-09T15:23:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T15:22:45Z |
---
base_model: unsloth/Qwen2.5-14B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sztyber
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-14B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
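A hedged loading sketch (not from the author): assuming the repo contains weights loadable through Unsloth's `FastLanguageModel`, inference could look like this; the sequence length and quantization flags are placeholders.
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sztyber/Qwen2.5-14B-Instruct_bird108_r64_6e",
    max_seq_length=4096,   # placeholder
    load_in_4bit=True,     # placeholder, to fit on smaller GPUs
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```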
|
oelsuejaka/blockassist-bc-shiny_aquatic_gibbon_1757431337
|
oelsuejaka
| 2025-09-09T15:22:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shiny aquatic gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:22:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shiny aquatic gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tralalerrotralala228/amiranoor
|
tralalerrotralala228
| 2025-09-09T15:19:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-09T14:47:50Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: amiranoor
---
# Amiranoor
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `amiranoor` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "amiranoor",
"lora_weights": "https://huggingface.co/tralalerrotralala228/amiranoor/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tralalerrotralala228/amiranoor', weight_name='lora.safetensors')
image = pipeline('amiranoor').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tralalerrotralala228/amiranoor/discussions) to add images that show off what you’ve made with this LoRA.
|
Alpha-VLLM/Lumina-DiMOO
|
Alpha-VLLM
| 2025-09-09T15:17:45Z | 0 | 8 |
diffusers
|
[
"diffusers",
"safetensors",
"llada",
"Diffusion Large Language Model",
"Multi-Modal Generation and Understanding",
"any-to-any",
"custom_code",
"license:apache-2.0",
"region:us"
] |
any-to-any
| 2025-09-09T10:56:17Z |
---
license: apache-2.0
pipeline_tag: any-to-any
tags:
- Diffusion Large Language Model
- Multi-Modal Generation and Understanding
---
<p align="center">
<img src="./assets/Lumina-DiMOO.png" width="20%"/>
</p>
<div align="center">
<h1> Lumina-DiMOO: An Omni Diffusion Large Language Model for Multi-Modal Generation and Understanding </h1>
[[📑 Technical Report (Coming Soon)]()]   [[💜 Project Page (Demo & Benchmark)](https://synbol.github.io/Lumina-DiMOO/)]   [[🤗 Model ](https://huggingface.co/Alpha-VLLM/Lumina-DiMOO)]
<b>¹Shanghai AI Laboratory, ²Shanghai Innovation Institute, ³Shanghai Jiao Tong University</b>
<b>⁴Nanjing University, ⁵The University of Sydney</b>
<b>⁶The Chinese University of Hong Kong, ⁷Tsinghua University</b>
<img src="./assets/teaser.png" width="100%"/>
</div>
## 📚 Introduction
We introduce Lumina-DiMOO, an omni foundational model for seamless multimodal generation and understanding. Lumina-DiMOO is distinguished by four key innovations:
- **Unified Discrete Diffusion Architecture:** Lumina-DiMOO sets itself apart from prior unified models by employing fully discrete diffusion modeling to handle inputs and outputs across various modalities.
- **Versatile Multimodal Capabilities:** Lumina-DiMOO supports a broad spectrum of multimodal tasks, including text-to-image generation (at arbitrary, high resolutions), image-to-image generation (e.g., image editing, subject-driven generation, and image inpainting), and advanced image understanding.
- **Higher Sampling Efficiency:** Compared to previous AR or hybrid AR-diffusion paradigms, Lumina-DiMOO demonstrates remarkable sampling efficiency. We additionally design a bespoke caching method that further accelerates sampling by 2×.
- **Superior Performance:** Lumina-DiMOO achieves state-of-the-art performance on multiple benchmarks, surpassing existing open-source unified multimodal models and setting a new standard in the field.
<img src="./assets/architecture.png" width="100%"/>
## 📽️ Qualitative Results
Here we present comparative generation results against other models. **For additional visualization results, please see our [Project Page](https://synbol.github.io/Lumina-DiMOO/).**
<details open>
<summary>Text-to-Image Comparison</summary>
<img src="./assets/demo_t2i.png" width="100%"/>
</details>
<details close>
<summary>Image Editing Comparison</summary>
<img src="./assets/demo_editing.png" width="100%"/>
</details>
<details close>
<summary>Controllable & Subject-Driven Generation Comparison</summary>
<img src="./assets/qualitative_control_subject.png" width="100%"/>
</details>
<details close>
<summary>Image Inpainting & Extrapolation</summary>
<img src="./assets/demo_inpainting.jpg" width="100%"/>
</details>
## 📊 Quantitative Performance
<details open>
<summary>GenEval Benchmark</summary>
<img src="./assets/GenEval_benchmark.png" width="100%"/>
</details>
<details close>
<summary>DPG Benchmark</summary>
<img src="./assets/DPG_benchmark.png" width="100%"/>
</details>
<details close>
<summary>OneIG-EN Benchmark</summary>
<img src="./assets/OneIG-EN_benchmark.png" width="100%"/>
</details>
<details close>
<summary>TIIF Benchmark</summary>
<img src="./assets/TIIF_benchmark.png" width="100%"/>
</details>
<details close>
<summary>Image-to-Image Benchmark</summary>
<img src="./assets/i2i_benchmark.png" width="100%"/>
</details>
<details close>
<summary>Image Understanding Benchmark</summary>
<img src="./assets/understanding_benchmark.png" width="100%"/>
</details>
## 🚀 Sampling Speed Analysis
- Since text generation is performed block-wise, unlike image generation, which decodes globally in a single pass, its speed depends on both the number of blocks and the number of steps per block. The speedup for image understanding is therefore less pronounced than for image generation.
- **Lumina-DiMOO Settings**: For image generation, we sample 64 steps. For image understanding, we set the block length to 256 and the number of sampling steps to 128.
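For orientation, a minimal loading sketch is shown below. The checkpoint ships custom code, so the real entry points are defined in the repository; `generate_image` and `generate_text` are hypothetical names used only to illustrate the settings above.
```python
# Hypothetical sketch only: actual entry points live in the repo's custom code.
import torch
from transformers import AutoModel, AutoTokenizer

ckpt = "Alpha-VLLM/Lumina-DiMOO"
tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModel.from_pretrained(
    ckpt, trust_remote_code=True, torch_dtype=torch.bfloat16
).to("cuda")

# Text-to-image: one global decoding pass, sampled over 64 steps.
image = model.generate_image("a watercolor fox", steps=64)  # hypothetical API

# Understanding: block-wise text decoding (block length 256, 128 steps).
answer = model.generate_text(
    image, "Describe this image.", block_length=256, steps=128
)  # hypothetical API
```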
<details open>
<summary>Sampling Speed Comparison</summary>
<img src="./assets/speed_comparison.png" width="100%"/>
</details>
## 💬 Discussion
You can reach us with this WeChat QR code!
<p align="left">
<img src="./assets/wechat.jpeg" width="50%"/>
<br>
</p>
## 📜 Acknowledgements
This work was supported and implemented by [MindSpeed MM](https://gitee.com/ascend/MindSpeed-MM), an open-source training framework for large-scale multimodal models, developed and maintained by Huawei's Computing Product Line. Specifically optimized for Huawei's Ascend AI chips, MindSpeed MM offers comprehensive support for distributed training and is tailored to a wide range of multimodal tasks.
## 📖 BibTeX
```
@misc{lumina-dimoo,
title={Lumina-DiMOO: A Unified Masked Diffusion Model for Multi-Modal Generation and Understanding},
author={Alpha VLLM Team},
year={2025},
url={https://github.com/Alpha-VLLM/Lumina-DiMOO},
}
```
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757430943
|
Stasonelison
| 2025-09-09T15:16:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:16:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rootu/blockassist-bc-snorting_fleecy_goose_1757430857
|
Rootu
| 2025-09-09T15:15:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:14:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757430439
|
bah63843
| 2025-09-09T15:08:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:08:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1757430317
|
kittygirlhere
| 2025-09-09T15:06:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy beaked coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:05:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy beaked coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nanonamosgro/blockassist-bc-snorting_roaring_mink_1757430312
|
nanonamosgro
| 2025-09-09T15:05:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting roaring mink",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:05:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting roaring mink
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
huitingnanette/blockassist-bc-territorial_yapping_bear_1757430276
|
huitingnanette
| 2025-09-09T15:04:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"territorial yapping bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:04:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- territorial yapping bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jona-972/Qwen2-0.5B-SFT-2
|
jona-972
| 2025-09-09T12:30:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2-0.5B",
"base_model:finetune:Qwen/Qwen2-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T12:26:33Z |
---
base_model: Qwen/Qwen2-0.5B
library_name: transformers
model_name: Qwen2-0.5B-SFT-2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2-0.5B-SFT-2
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jona-972/Qwen2-0.5B-SFT-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
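Below is a minimal sketch of such an SFT run with TRL; the dataset choice and hyperparameters are illustrative assumptions, not the actual training recipe.
```python
# Sketch only: the dataset and hyperparameters are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # hypothetical dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2-0.5B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen2-0.5B-SFT-2"),
)
trainer.train()
```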
### Framework versions
- TRL: 0.23.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757420852
|
cwayneconnor
| 2025-09-09T12:29:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T12:28:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Parveshiiii/Auto-Completer-0.1
|
Parveshiiii
| 2025-09-09T12:28:44Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"auto-completion",
"long-context",
"smollm2",
"fine-tuned",
"en",
"base_model:HuggingFaceTB/SmolLM2-360M",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T09:12:59Z |
---
license: apache-2.0
language: en
tags:
- text-generation
- auto-completion
- long-context
- smollm2
- fine-tuned
- transformers
base_model: HuggingFaceTB/SmolLM2-360M
pipeline_tag: text-generation
library_name: transformers
---
# 🧠 Auto-Completer-0.1
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/677fcdf29b9a9863eba3f29f/0go71V9BNC6wAjagdNVlp.png" width="600"/>
</div>
**Auto-Completer-0.1** is a fine-tuned version of [SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M), optimized for **long-range dependency modeling** and **state-of-the-art auto-completion performance**. Trained on an additional **4.2 million tokens** of curated instruction-style and math-rich data, this model excels at completing documents, code, and reasoning chains with high fidelity and semantic coherence.
---
## 🚀 Highlights
- 🔍 **Base Model**: SmolLM2-360M (360M parameters, instruction-tuned)
- 📈 **Fine-Tuning Tokens**: +4.2M tokens focused on long-context reasoning
- 🧠 **Specialization**: Auto-completion, document continuation, math reasoning
- 🧪 **Performance**: SOTA on internal benchmarks for completion accuracy and semantic retention
- 🧰 **Context Length**: Up to 4K tokens with packing enabled
---
## 📦 Intended Use
| ✅ Appropriate Uses | 🚫 Out-of-Scope Uses |
|-------------------------------|------------------------------|
| Auto-completion in IDEs | Real-time dialogue agents |
| Math and logic reasoning | Sensitive medical inference |
| Document drafting | Unfiltered open-domain chat |
| Code continuation | Offensive or biased content |
---
## 🧑🔬 Training Details
- **Base**: SmolLM2-360M (Instruct variant)
- **Additional Tokens**: 4.2M curated samples from MathX-5M, code snippets, and long-form completions
- **Trainer**: `SFTTrainer` via TRL with Unsloth backend
- **Batch Size**: 8 (packed)
- **Max Seq Length**: 6144
- **Optimizer**: `adamw_8bit`
- **Steps**: ~1,000 (warmup: 60)
- **Learning Rate**: 2e-5
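For reference, a minimal sketch of this setup with Unsloth and TRL follows; the dataset handle is hypothetical, and the other values mirror the list above.
```python
# Sketch of the setup listed above; the dataset handle is hypothetical.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceTB/SmolLM2-360M",
    max_seq_length=6144,
)
dataset = load_dataset("XenArcAI/MathX-5M", split="train")  # hypothetical handle

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=8,
        packing=True,               # sequence packing, as described above
        learning_rate=2e-5,
        warmup_steps=60,
        max_steps=1000,             # ~1k steps, per the list above
        optim="adamw_8bit",
        output_dir="auto-completer-0.1",
    ),
)
trainer.train()
```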
---
## 📊 Evaluation
| Metric | Score |
|----------------------|-----------|
| Completion Accuracy | 94.2% |
| Semantic Retention | 91.8% |
| Math Reasoning F1 | 88.6 |
| Code Continuation BLEU | 87.3 |
> Benchmarked on internal test sets derived from MathX, HumanEval-lite, and document continuation tasks.
---
### How to use
```bash
pip install transformers
```
## 🧪 Example Usage
> Don't try to use it as a chat model; it's not meant for that.
* _Using full precision_
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "Parveshiiii/Auto-Completer-0.1"
device = "cuda" # or "cpu"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(
inputs,
    repetition_penalty=1.2, # increase this if the model gets stuck in loops after completing the sentence
    max_new_tokens=10, # keep this low for autocomplete; the model generates until it hits the token cap
do_sample=True, # use this for diversity
eos_token_id=tokenizer.eos_token_id # Optional: stop at end-of-text
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
checkpoint = "Parveshiiii/Auto-Completer-0.1"
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
checkpoint,
device_map="auto",
torch_dtype=torch.bfloat16 # or torch.float16 for fp16
)
# Encode prompt
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
# Generate with sampling and token control
outputs = model.generate(
inputs,
    max_new_tokens=10, # keep this low for autocomplete; the model generates until it hits the token cap
do_sample=True, # Enable sampling for diversity
temperature=0.7, # Controls randomness (lower = more deterministic)
top_p=0.9, # Nucleus sampling (focus on top 90% of probability mass)
    repetition_penalty=1.2, # increase this if the model gets stuck in loops after completing the sentence
eos_token_id=tokenizer.eos_token_id # Optional: stop at end-of-text
)
# Decode and print
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
```python
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 723.56 MB
```
---
## ⚠️ Limitations
- Not optimized for multi-turn chat
- May hallucinate in open-ended prompts without structure
- Limited factual grounding beyond training corpus
---
## 📚 Citation
If you use this model, please cite:
```bibtex
@misc{rawal2025autocompleter,
title={Auto-Completer-0.1: Long-Range Completion with SmolLM2},
author={Parvesh Rawal},
year={2025},
url={https://huggingface.co/Parveshiiii/Auto-Completer-0.1}
}
```
---
## 🛠 Maintainer
**Parvesh Rawal**
Founder, XenArcAI
Architect of agentic orchestration, reproducible AI workflows, and reasoning-aware systems.
---
|
aronlg/blockassist-bc-wiry_insectivorous_bat_1757420773
|
aronlg
| 2025-09-09T12:27:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry insectivorous bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T12:27:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry insectivorous bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lejelly/gs-deepseek-7B-math-code-w1_0_6_w2_0_6
|
lejelly
| 2025-09-09T12:27:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"base_model:deepseek-ai/deepseek-coder-7b-base-v1.5",
"base_model:merge:deepseek-ai/deepseek-coder-7b-base-v1.5",
"base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:merge:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:deepseek-ai/deepseek-math-7b-instruct",
"base_model:merge:deepseek-ai/deepseek-math-7b-instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T12:24:46Z |
---
base_model:
- deepseek-ai/deepseek-coder-7b-base-v1.5
- deepseek-ai/deepseek-math-7b-instruct
- deepseek-ai/deepseek-coder-7b-instruct-v1.5
library_name: transformers
tags:
- mergekit
- merge
---
# w1_0_6_w2_0_6
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [deepseek-ai/deepseek-coder-7b-base-v1.5](https://huggingface.co/deepseek-ai/deepseek-coder-7b-base-v1.5) as a base.
### Models Merged
The following models were included in the merge:
* [deepseek-ai/deepseek-math-7b-instruct](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct)
* [deepseek-ai/deepseek-coder-7b-instruct-v1.5](https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# Task Arithmetic - Grid Search
# Weights: 0.6, 0.6
base_model: deepseek-ai/deepseek-coder-7b-base-v1.5
models:
- model: deepseek-ai/deepseek-math-7b-instruct
parameters:
weight: 0.6
- model: deepseek-ai/deepseek-coder-7b-instruct-v1.5
parameters:
weight: 0.6
merge_method: task_arithmetic
parameters:
normalize: false
lambda: 1.0
dtype: float16
tokenizer:
source: union
```
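To reproduce this merge locally, mergekit can consume the YAML above directly (the `mergekit-yaml` CLI is the one-line equivalent). A minimal sketch of the Python API, assuming the configuration is saved as `config.yaml`:
```python
# Minimal sketch: run the merge from the YAML above with mergekit.
# pip install mergekit
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./w1_0_6_w2_0_6",  # output directory for the merged model
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```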
|
Clemylia/ModelTest00
|
Clemylia
| 2025-09-09T12:19:28Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"fr",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T12:04:22Z |
---
license: apache-2.0
language:
- fr
metrics:
- code_eval
base_model:
- openai/gpt-oss-20b
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation
- transformers
inference-provider:
- Fal AI
---
# `ModelTest00`
A model that replies "test" to every message.
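A minimal query sketch with the `transformers` pipeline, assuming the checkpoint loads as a standard text-generation model:
```python
# Minimal sketch; generation settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="Clemylia/ModelTest00")
print(generator("Hello!", max_new_tokens=8)[0]["generated_text"])
# Per the card, the expected reply is "test" regardless of the prompt.
```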
|
beaudrieflorencio/blockassist-bc-barky_invisible_butterfly_1757420236
|
beaudrieflorencio
| 2025-09-09T12:17:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky invisible butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T12:17:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky invisible butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Viktor-01/blockassist-bc-leaping_humming_finch_1757417796
|
Viktor-01
| 2025-09-09T12:15:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leaping humming finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T12:15:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leaping humming finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hopghoprt/blockassist-bc-spotted_elusive_cassowary_1757419778
|
hopghoprt
| 2025-09-09T12:10:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted elusive cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T12:09:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted elusive cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
palmart111/blockassist-bc-armored_feline_capybara_1757419569
|
palmart111
| 2025-09-09T12:06:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored feline capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T12:06:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored feline capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|