modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
wednors/WednorsWTK7
|
wednors
| 2025-08-12T21:08:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-05-28T08:45:01Z |
Description missing.
|
srajal87/llama3-pricer-2025-08-12_17.41.59-size8000
|
srajal87
| 2025-08-12T21:08:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] |
text-generation
| 2025-08-12T17:57:46Z |
---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- base_model:adapter:meta-llama/Meta-Llama-3.1-8B
- lora
- sft
- transformers
- trl
pipeline_tag: text-generation
model-index:
- name: llama3-pricer-2025-08-12_17.41.59-size8000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/22ml10ro707-madhav-institude-of-technology-and-science/llama3-pricer/runs/ozgddfxn)
# llama3-pricer-2025-08-12_17.41.59-size8000
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: PAGED_ADAMW with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755031084
|
calegpedia
| 2025-08-12T21:07:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T21:07:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ibm-granite/granite-4.0-tiny-base-preview-GGUF
|
ibm-granite
| 2025-08-12T21:05:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"language",
"granite-4.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T18:37:38Z |
---
license: apache-2.0
library_name: transformers
tags:
- language
- granite-4.0
- gguf
---
> [!NOTE]
> This repository contains models that have been converted to the GGUF format with various quantizations from an IBM Granite base model.
>
> Please reference the base model's full model card here:
> https://huggingface.co/ibm-granite/granite-4.0-tiny-base-preview
# Granite-4.0-Tiny-Base-Preview
**Model Summary:**
Granite-4.0-Tiny-Base-Preview is a 7B-parameter hybrid mixture-of-experts (MoE) language model featuring a 128k token context window. The architecture combines Mamba-2 layers with softmax attention for enhanced expressiveness, and uses no positional encoding for better length generalization.
- **Developers:** Granite Team, IBM
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Release Date**: May 2nd, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 4.0 models for languages beyond these 12.
**Intended Use:**
Prominent use cases of LLMs in text-to-text generation include summarization, text classification, extraction, question-answering, and other long-context tasks. All Granite base models can handle these tasks, as they were trained on a large amount of data from various domains. Moreover, they can serve as baselines for creating specialized models for specific application scenarios.
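Since this repository ships GGUF files, the following is a minimal loading sketch using the `llama-cpp-python` bindings. The quantization filename glob is an assumption, so adjust it to a file that actually exists in the repo, and note that the Granite 4.0 architecture requires a sufficiently recent llama.cpp build.
```python
# Minimal sketch: pull a GGUF quantization from this repo and run a completion.
# Requires llama-cpp-python with the huggingface_hub extra installed.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ibm-granite/granite-4.0-tiny-base-preview-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quantization choice; match a real file
    n_ctx=4096,               # the model itself supports up to a 128k context
)
out = llm("The three primary colors are", max_tokens=32)
print(out["choices"][0]["text"])
```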
|
Heouzen/flux1D_lora
|
Heouzen
| 2025-08-12T21:05:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-11-05T09:46:25Z |
---
license: apache-2.0
---
|
ibm-granite/granite-4.0-tiny-preview-GGUF
|
ibm-granite
| 2025-08-12T21:04:25Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"language",
"granite-4.0",
"text-generation",
"base_model:ibm-granite/granite-4.0-tiny-base-preview",
"base_model:quantized:ibm-granite/granite-4.0-tiny-base-preview",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2025-08-12T18:37:37Z |
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-4.0
- gguf
base_model:
- ibm-granite/granite-4.0-tiny-base-preview
---
> [!NOTE]
> This repository contains models that have been converted to the GGUF format with various quantizations from an IBM Granite base model.
>
> Please reference the base model's full model card here:
> https://huggingface.co/ibm-granite/granite-4.0-tiny-preview
# Granite-4.0-Tiny-Preview
**Model Summary:**
Granite-4.0-Tiny-Preview is a 7B-parameter fine-grained hybrid mixture-of-experts (MoE) instruct model fine-tuned from Granite-4.0-Tiny-Base-Preview on a combination of permissively licensed open-source instruction datasets and internally collected synthetic datasets tailored to long-context problems. The model was developed using a diverse set of techniques with a structured chat format, including supervised fine-tuning and model alignment using reinforcement learning.
- **Developers:** Granite Team, IBM
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Release Date**: May 2nd, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12 languages.
**Intended Use:**
This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
**Capabilities**
* Thinking
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
* Long-context tasks including long document/meeting summarization, long document QA, etc.
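Because these quantized checkpoints are instruction-tuned, a chat-style call is the natural entry point. A minimal sketch with the `llama-cpp-python` bindings follows; the quantization filename is an assumption, so pick a file the repo actually provides.
```python
# Minimal sketch: chat with a GGUF quantization of the instruct model.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ibm-granite/granite-4.0-tiny-preview-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quantization choice
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the benefits of a long context window."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```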
|
codebasic/Qwen3-8B-GGUF
|
codebasic
| 2025-08-12T21:04:12Z | 0 | 0 | null |
[
"gguf",
"llama.cpp",
"qwen",
"quantization",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T09:03:23Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-8B
tags:
- gguf
- llama.cpp
- qwen
- quantization
---
# Qwen3-8B-GGUF
## Provided by Codebasic
This model was converted to the GGUF format and published by **Codebasic**.
This repository provides the [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) model in several GGUF quantization variants.
They can be used in any environment that supports the GGUF format, such as llama.cpp, text-generation-webui, and koboldcpp.
---
## Provided Files
| Filename | Quantization | Approx. memory | Description |
|--------|------------|----------------------|------|
| `Qwen3-8B-F16.gguf` | FP16 (unquantized) | ~16GB | Original FP16 weights (GPU / high-spec environments) |
| `Qwen3-8B-Q8_0.gguf` | Q8_0 | ~9GB | High-quality quantization, near-FP16 accuracy |
> Note: memory requirements are estimates and may vary by environment.
---
## Usage
### 1. Docker (llama.cpp Q8_0 example)
```bash
docker run -v /path/to/models:/models \
ghcr.io/ggml-org/llama.cpp:full \
--run -m /models/Qwen3-8B/Qwen3-8B-Q8_0.gguf \
-p "Introduce language models"
```
|
k1000dai/residualact_libero_small
|
k1000dai
| 2025-08-12T21:03:25Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"residualact",
"dataset:k1000dai/libero-addinfo",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-12T21:03:03Z |
---
datasets: k1000dai/libero-addinfo
library_name: lerobot
license: apache-2.0
model_name: residualact
pipeline_tag: robotics
tags:
- robotics
- lerobot
- residualact
---
# Model Card for residualact
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized; please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Zakaria279/GPT-OSS-DIALECT_TRANSLATOR-2
|
Zakaria279
| 2025-08-12T21:03:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T21:03:05Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Zakaria279
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
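The card ends at provenance; here is a minimal inference sketch, assuming the repo contains a full transformers checkpoint that loads directly (the card does not say whether it is an adapter or merged weights). The prompt is a hypothetical placeholder, and gpt_oss support requires a recent transformers release.
```python
# Minimal sketch: load the uploaded checkpoint and run one chat turn.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zakaria279/GPT-OSS-DIALECT_TRANSLATOR-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Translate to dialect: Good morning, friend."}]  # hypothetical prompt
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```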
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755032433
|
ggozzy
| 2025-08-12T21:02:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T21:01:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Olmoe-0.5B-6B-GGUF
|
mradermacher
| 2025-08-12T20:59:31Z | 737 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:motionlabs/Olmoe-0.5B-6B",
"base_model:quantized:motionlabs/Olmoe-0.5B-6B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-03T09:31:58Z |
---
base_model: motionlabs/Olmoe-0.5B-6B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/motionlabs/Olmoe-0.5B-6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Olmoe-0.5B-6B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q3_K_L.gguf) | Q3_K_L | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q4_K_M.gguf) | Q4_K_M | 5.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q6_K.gguf) | Q6_K | 7.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
BRlkl/BingoGuard-llama-3B-pt
|
BRlkl
| 2025-08-12T20:59:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T20:53:59Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** BRlkl
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755031193
|
Sayemahsjn
| 2025-08-12T20:57:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:57:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755032128
|
ggozzy
| 2025-08-12T20:56:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:56:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
agurung/dft_all_qwen7B_25percent_lr_1e4_allgrad
|
agurung
| 2025-08-12T20:55:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T15:34:08Z |
---
library_name: transformers
model_name: dft_all_qwen7B_25percent_lr_1e4_allgrad
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for dft_all_qwen7B_25percent_lr_1e4_allgrad
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="agurung/dft_all_qwen7B_25percent_lr_1e4_allgrad", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alexgurung/ncp_reasoning_projector/runs/cy7a5cx0)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.53.3
- Pytorch: 2.7.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ecamli/blockassist-bc-hulking_soft_hippo_1755032075
|
ecamli
| 2025-08-12T20:55:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:55:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jack-Payne1/qwen_2.5_7b-phoenix_T2_order_seed2
|
Jack-Payne1
| 2025-08-12T20:54:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T20:51:25Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jack-Payne1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
andr0m4da/blockassist-bc-grazing_hunting_boar_1755031940
|
andr0m4da
| 2025-08-12T20:53:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing hunting boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:53:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing hunting boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
1torriani/exupery_v2
|
1torriani
| 2025-08-12T20:52:58Z | 0 | 0 | null |
[
"literature",
"en",
"license:mit",
"region:us"
] | null | 2025-08-12T20:51:40Z |
---
license: mit
language:
- en
tags:
- literature
---
|
Honeywithcrypto/blockassist-bc-tall_miniature_porpoise_1755031813
|
Honeywithcrypto
| 2025-08-12T20:51:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall miniature porpoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:51:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall miniature porpoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1755031798
|
Gemvision13
| 2025-08-12T20:51:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:51:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zhuojing-huang/ewc_test
|
zhuojing-huang
| 2025-08-12T20:44:18Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T15:10:00Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: ewc_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ewc_test
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- training_steps: 183
### Training results
### Framework versions
- Transformers 4.53.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.2
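No usage example is included; a minimal sketch follows, assuming the standard text-generation pipeline applies to this from-scratch GPT-2 checkpoint.
```python
# Minimal sketch: sample a continuation from the checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="zhuojing-huang/ewc_test")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```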
|
BootesVoid/cme8zagth03eurts86yr2q8lr_cme8zen1i03fprts8pqdonedd
|
BootesVoid
| 2025-08-12T20:40:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T20:40:50Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KC001
---
# Cme8Zagth03Eurts86Yr2Q8Lr_Cme8Zen1I03Fprts8Pqdonedd
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KC001` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "KC001",
"lora_weights": "https://huggingface.co/BootesVoid/cme8zagth03eurts86yr2q8lr_cme8zen1i03fprts8pqdonedd/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cme8zagth03eurts86yr2q8lr_cme8zen1i03fprts8pqdonedd', weight_name='lora.safetensors')
image = pipeline('KC001').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cme8zagth03eurts86yr2q8lr_cme8zen1i03fprts8pqdonedd/discussions) to add images that show off what you've made with this LoRA.
|
andr0m4da/blockassist-bc-grazing_hunting_boar_1755030919
|
andr0m4da
| 2025-08-12T20:38:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing hunting boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:38:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing hunting boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ecamli/blockassist-bc-hulking_soft_hippo_1755031006
|
ecamli
| 2025-08-12T20:37:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:37:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755030906
|
ggozzy
| 2025-08-12T20:36:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:36:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sphiratrioth666/Character_Generation_Templates
|
sphiratrioth666
| 2025-08-12T20:32:47Z | 0 | 37 | null |
[
"template,",
"character,",
"generator,",
"sillytavern,",
"silly,",
"tavern,",
"tool,",
"en",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.3-GGUF",
"base_model:finetune:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.3-GGUF",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-01-23T04:47:27Z |
---
license: cc-by-nc-4.0
language:
- en
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
- mistralai/Mistral-Small-Instruct-2409
- TheDrummer/Cydonia-22B-v1.3
- anthracite-org/magnum-v4-12b-gguf
- anthracite-org/magnum-v4-72b
- bartowski/MN-12B-Lyra-v4-GGUF
- ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.3-GGUF
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
tags:
- template,
- character,
- generator,
- sillytavern,
- silly,
- tavern,
- tool,
---
*(Banner image removed)* Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License (https://www.goodfon.com/films/wallpaper-download-2560x1440-vlastelin-kolets-aragorn-sauron-gollum-frodo-beggins-nazguly.html)<br>
Today, I bring you a character generation prompt. Generate any character imaginable and get cards that work out of the box - not like with 99% of the existing, similar generators. Seriously.
It is not random, bland trash. I made it exactly because those generators are not usable (as of JAN/2025). I've tried them all, got disappointed, so I designed a good tool myself. Characters follow a consistent, custom template. They're accurate and true to the lore if you generate existing ones. They are rational and believable when you want to create new, original ones. I've generated around 100 cards with it already. I did not even have to touch a majority of them after generation.
No need to install anything. Just open up GPT, Gemini, Deepseek or any other LLM API of your choice, copy-paste my prompt, describe what character you want (1-2 sentences!) - something like: "a wizard female elf from dungeons and dragons" or "a Japanese salaryman from Tokyo" - and... that's it. You can provide more details, or generate from nothing but the name and the origin of the character - such as Jinx from League of Legends video game in the example below.
Characters are generated in a custom format - partly inspired by JSON, partly by Python (P-list) and partly by different data strings I work with. This custom format allows saving tokens, keeping things organized and using other, creative tricks with lorebooks, which I describe in separate posts. Because of that, there are two formats of the char gen template: a) universal, b) SX-4 - customized for my personal roleplaying systems SX-4/GM-4/CG-4 (coming soon). Just check all the posts on my profile.
<b>Template Contents (what is generated):</b>
<div style="background-color: #ffefb8; padding: 16px 32px; outline: 2px solid; border-radius: 10px;">
<li><b>character</b> (personal Information, appearance, personality, likes, dislikes, skills, goals, clothes for different occasions)</li>
<li><b>scenario</b> (allows realistically simulating everyday life of your character, it will include lore - so it's not a bland filler but you can also replace it if you wish)</li>
<li><b>first message</b> (which makes sense, you'll see, trust me)</li>
</div>
<br>
BEWARE: IT WILL NOT GENERATE A CARD ITSELF (AS A FILE). YOU NEED TO COPY THE GENERATED CHARACTER DESCRIPTION AND PASTE IT INTO THE CARDS EDITOR OF YOUR CHOICE. YOU CAN USE THE CHARACTER MANAGER IN SILLY TAVERN OR ANYTHING ONLINE. IT'S NOT ROCKET SCIENCE. I WILL NOT PROVIDE A DETAILED GUIDE TO TEACH YOU HOW TO MAKE A CHARACTER CARD, I'M SORRY FOR THAT. THERE'RE MANY EDITORS AND ALL OF THEM ARE SIMILAR, THEY ALL SAVE THE CHARACTER IN .PNG OR .JSON FILE YOU NEED TO IMPORT INTO A SILLYTAVERN OR WHEREVER YOU WANNA USE THEM.
Example character cards editor online: (https://desune.moe/aichared/)
<b>Features:</b>
- able to rip detailed information about any existing character from Internet sources (wikis); assuming you are using the web search API capabilities (GPT, Claude or local extensions in SillyTavern etc.)
- able to generate realistic characters that do not exist, based on a couple of words you provide to describe who you actually want to generate (using the same Internet capabilities of your API and the general power of the LLM that knows who a Japanese salaryman or who a fantasy fire wizard is)
- able to generate appearance from a photo (if you are using a vision model locally or again, something like GPT) - so - proper outfit, hair, eyes etc. but it works equally well with existing characters without a picture. It does not make mistakes.
<b>How to use it:</b>
1. Download the 2 .txt files with a male and a female template from the files repository of this post.
2. Open up the downloaded .txt files. They include my templates.
3. Open up GPT, Claude or the LLM of your choice.
4. Copy-paste the content of a male/female template into the GPT chat. Just like you write a standard message.
5. Replace the DESCRIPTION word at the top of what you copy-pasted with a description of your desired character - like: Jinx from League of Legends. Attach a picture if you want. I did not use a picture in my example.
6. Hit enter.
7. If it does not generate the character in the proper format but - for instance - as a list, ask the LLM to regenerate it exactly in the given format. Once the LLM understands what you want and returns it properly, you can generate more characters in the same chat without copy-pasting the template again and again, and they will always appear in the expected format. I've tried it with all the available LLMs; it works, it just requires a couple of retries from time to time.
8. Copy the generated character information into your character editor online or in a SillyTavern UI. I suggest copying all the character parts into a description box of the card, you do not actually need to use the personality tab for personality. Then - copy a scenario into the scenario box. You can still copy it just into a description but I prefer using a separate scenario box. Alternatively - do not copy the scenario if you do not want the universal day routine - but it helps with adding color to the character. I personally like the open scenarios, you do whatever you like. Last, copy a starting message into the starting message box. You do not need to alter anything but you can if you wish, obviously.
9. Add a character picture you want, save the finished character card as a .PNG or a .JSON file. You're done.
10. Have fun.
<br>
<b>Example - Jinx from League of Legends</b>
<br>
*(Character image removed)* Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License (https://mrwallpaper.com/images/hd/jinx-arcane-escaping-by-rocket-8ss681ujj6iommno.jpg)<br>
<div style="background-color: #ffefb8; padding: 16px 32px; outline: 2px solid; border-radius: 10px;">
<b>Character:</b>
<br>
{{"Personal Information"}}:{name: Jinx, race: Caucasian, nationality: Zaunite, gender: female, age: 21, profession: criminal mastermind, residence: [Zaun, apartment (lower-city)], marital status: single}
<br>
{{"Appearance"}}:{hair: [blue, straight, long (waist-length), twin braids], eyes: pink, height: 170 cm, weight: 50 kg, body: [slim, light skin], breasts: [small, B-cup, small areolas, cherry-pink nipples], armpit hair: shaved, pubic hair: shaved, fingernails: painted (pink and blue), toenails: painted (pink and blue)}
<br>
{{"Personality"}}:{Jinx is a manic and impulsive criminal with a penchant for creating chaos and destruction. She exhibits a gleeful disregard for the consequences of her actions, often engaging in reckless behavior purely for her own amusement. Her unpredictable nature and love for mayhem make her a formidable and feared figure in Zaun and Piltover. Jinx's speech is erratic and filled with dark humor, reflecting her unhinged psyche.}
<br>
{{"Likes"}}:{mayhem, explosions, chaos, pranks, graffiti, outsmarting authorities}
<br>
{{"Dislikes"}}:{boredom, order, authority figures, being ignored}
<br>
{{"Goals"}}:{to create as much chaos and destruction as possible, to outwit and undermine Piltover's enforcers, to have fun without restrictions}
<br>
{{"Skills"}}:{expert in explosives and firearms, exceptional agility and acrobatics, strategic planning of heists and attacks, high intelligence masked by her chaotic demeanor}
<br>
{{"Weapons"}}:{minigun ("Pow-Pow"), shock pistol ("Zapper"), explosive grenades ("Flame Chompers"), rocket launcher ("Fishbones")}
<br>
{{"Main Outfit"}}:{striped crop top (black and pink), shorts with suspenders (purple and pink), thigh-high mismatched stockings (one pink, one blue), combat boots (black leather with pink laces), lingerie: [lace bra (black), lace thong (black)]}
<br>
{{"Formal Outfit"}}:{waist jacket (black leather), skinny pants (dark purple), fingerless gloves (black leather), high-heeled boots (black), lingerie: [lace bra (black), lace thong (black)]}
<br>
{{"Sleeping Outfit"}}:{nightgown (dark blue), silk thong (dark blue), soft slippers (white)}
<br>
{{"Running Outfit"}}:{sports bra (pink), leggings (black), sports shoes (white), lingerie: thong (pink)}
<br>
{{"Exercise Outfit"}}:{sports bra (blue), leggings (black), bare feet, lingerie: lace thong (blue)}
<br>
{{"Swimsuit"}}:{bikini (black), barefoot}
</div>
<br>
<div style="background-color: #ffefb8; padding: 16px 32px; outline: 2px solid; border-radius: 10px;">
<br>
<b>Scenario:</b>
<br>
{{"Scenario"}}:{{{char}} is living everyday life, {{char}} and {{user}} keep crossing each other's paths as {{char}} and {{user}} relationship develops, {{char}} slowly develops a crush on {{user}}, everyday routine:[morning: {{char}} starts the day by tinkering with explosives or tweaking her weapons in her chaotic lower-city apartment. She often talks to her gadgets as if they were alive, her laughter echoing through the room., day: {{char}} roams the streets of Zaun and sometimes sneaks into Piltover, causing minor chaos and pulling off elaborate pranks. She enjoys challenging enforcers and leaving behind cryptic graffiti., evening: {{char}} lounges in her apartment, reviewing the day's antics and drawing up plans for bigger stunts. Her evenings are filled with self-satisfied giggles and loud music, often paired with snacks she โborrowedโ from others.], current mood: {{char}} is feeling mischievous and restless, eager for a thrilling encounter or an unexpected turn of events.}
</div>
<br>
<div style="background-color: #ffefb8; padding: 16px 32px; outline: 2px solid; border-radius: 10px;">
<br>
<b>Starting Message:</b>
<br>
*The sound of clinking metal fills the cramped apartment as Jinx tinkers with her rocket launcher, muttering to herself between fits of laughter. Wires, bolts, and half-finished gadgets lie scattered across every surface. She props one foot on the workbench and spins around to face you as you enter the room unannounced.*
<br>
"Well, well, look who decided to crash the party! You here to watch the magic, or are you planning to steal my snacks? Better not be the snacks."
<br>
*She grins, twirling a wrench like a baton before launching it onto a pile of junk. Leaning casually against the bench, she gestures toward a mess of tools and parts.*
<br>
"Sit tight. Iโm cooking up something explosive - literally. You might want to duck when I say so."
</div>
<br>
She was generated with this exact template. I did not change ANYTHING, I did not use a picture, just the template in GPT. That's exactly what I got back. It is quite precise, detailed, not bland and usable out of the box, isn't it?
<br>Have fun!
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755028818
|
indoempatnol
| 2025-08-12T20:25:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:25:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jose-morales-glbnt/my_awesome_billsum_model
|
jose-morales-glbnt
| 2025-08-12T20:24:51Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T20:19:22Z |
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
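No usage example is given; below is a minimal sketch assuming the usual T5 summarization setup (the `summarize:` prefix is the t5-small convention, not something this card states).
```python
# Minimal sketch: summarize a passage with the fine-tuned T5 checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="jose-morales-glbnt/my_awesome_billsum_model")
text = "summarize: The bill directs the agency to establish a grant program for rural broadband deployment..."
print(summarizer(text, max_length=60)[0]["summary_text"])
```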
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1755030136
|
kayacrypto
| 2025-08-12T20:23:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:23:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vardis/test_gpt_med
|
Vardis
| 2025-08-12T20:23:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T20:23:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yasserrmd/RSCaLM-138M-Core
|
yasserrmd
| 2025-08-12T20:23:23Z | 0 | 0 | null |
[
"pytorch",
"gptx-min",
"dataset:HuggingFaceFW/fineweb-edu",
"region:us"
] | null | 2025-08-12T19:44:03Z |
---
datasets:
- HuggingFaceFW/fineweb-edu
---
# RSCaLM-138M-core
**RSCaLM** (**Research Scale Causal Language Model**), *Core Edition*, is an **experimental 138M-parameter decoder-only transformer** trained for **20,000 steps**.
Unlike the LLaMA variant, this model is implemented entirely with a **custom minimal GPT architecture** (`standalone_transformer_lm.GPT`) and **SentencePiece** tokenization, with no Hugging Face Transformers dependency.
---
## Experiment Summary
* **Architecture:** Custom GPT-style causal decoder
* Implemented in `standalone_transformer_lm.py`
* Learned positional embeddings (absolute)
* Multi-head self-attention with KV caching
* GELU feed-forward layers
* LayerNorm
* **Parameter Count:** \~138M
* **Context Length:** 2048 tokens
* **Tokenizer:** SentencePiece (`tokenizer.model`)
* **Training Framework:** Pure PyTorch (no Transformers)
* **Optimizer:** AdamW (β1=0.9, β2=0.95, weight decay=0.1)
* **Scheduler:** Cosine decay with warmup
* **Precision:** Mixed FP16/BF16 training
* **Steps Completed:** 20,000 (\~32% of planned total)
---
## Validation Loss Progress
| Step | Val Loss |
| ------ | -------- |
| 1,000 | 5.6011 |
| 2,000 | 4.8598 |
| 5,000 | 4.2239 |
| 10,000 | 3.9756 |
| 15,000 | 3.8608 |
| 20,000 | 3.7984 |
---
## Notes
* **Prototype only**: repetition loops are expected in longer generations.
* Requires **`standalone_transformer_lm.py`** and **SentencePiece** to run.
* Does **not** load with `transformers.AutoModelForCausalLM`.
---
## Example Usage
```python
import torch, sentencepiece as spm
from standalone_transformer_lm import GPT, GPTConfig
# Load checkpoint & config
ckpt = torch.load("ckpt_best.pt", map_location="cpu")
cfg = GPTConfig(**ckpt["config"])
# Init model & load weights
model = GPT(cfg).eval()
model.load_state_dict(ckpt["model"])
# Load tokenizer
sp = spm.SentencePieceProcessor()
sp.load("tokenizer.model")
# Encode prompt
ids = torch.tensor([sp.encode("Dubai is", out_type=int)])
# Generate text
out = model.generate(ids, max_new_tokens=40)
print(sp.decode(out[0].tolist()))
```
---
## Example Usage (with repetition control)
```python
import torch, sentencepiece as spm
from standalone_transformer_lm import GPT, GPTConfig
ckpt = torch.load("ckpt_best.pt", map_location="cpu")
cfg = GPTConfig(**ckpt["config"])
model = GPT(cfg).eval()
model.load_state_dict(ckpt["model"])
sp = spm.SentencePieceProcessor()
sp.load("tokenizer.model")
prompt = "when a man goes to fishing"
ids = torch.tensor([sp.encode(prompt, out_type=int)])
# Manual repetition control
out = model.generate(
ids,
max_new_tokens=100,
temperature=0.7, # Lower temp = more focused
top_k=50, # Top-K sampling
top_p=0.9, # Nucleus sampling
repetition_penalty=1.2, # Penalize repeats
no_repeat_ngram_size=3, # Block repeating trigrams
)
print(sp.decode(out[0].tolist()))
```
---
### Tips to Reduce Loops
* Increase `repetition_penalty` to 1.2–1.5
* Use `no_repeat_ngram_size=3` or higher
* Combine `top_k` and `top_p` for better sampling variety
* Lower `temperature` for more deterministic completions
---
## License
Apache-2.0
---
|
narukijima/pioneer-mini-v1
|
narukijima
| 2025-08-12T20:22:20Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"custom_code",
"en",
"ja",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-08T19:36:45Z |
---
library_name: transformers
base_model: openai/gpt-oss-20b
language: [en, ja]
pipeline_tag: text-generation
tags: []
---
# pioneer-mini-v1
**Overview**
This is a test model.
**Technical notes**
- Base: `openai/gpt-oss-20b` (bf16)
- Steering: rank-1 delta on Q/K/V across 24 layers (RMSNorm-aware)
- Concept vector: `concept_vec_v15k.pt`, shape [24, 6, 2880], gain=0.5
- Checkpoint: single baked weights (no LoRA/adapters; knowledge → base)
- Data used: neutral_examples=86376, pairs_used=14398
- Source files: `narukijima/pioneer` → `P_instruction_pairs_en.jsonl`, `P_instruction_pairs_ja.jsonl`
- Inference: use base tokenizer & chat template
**Quick inference**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
M = "narukijima/pioneer-mini-v1"
tok = AutoTokenizer.from_pretrained(M, trust_remote_code=True)
mdl = AutoModelForCausalLM.from_pretrained(
M, torch_dtype=torch.bfloat16, device_map='auto', trust_remote_code=True
)
msgs = [{"role":"user","content":"test"}]
p = tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
out = mdl.generate(**tok(p, return_tensors='pt').to(mdl.device),
max_new_tokens=64, do_sample=True, temperature=0.7)
print(tok.decode(out[0], skip_special_tokens=True))
```
|
narukijima/connector-mini-v1
|
narukijima
| 2025-08-12T20:21:55Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"custom_code",
"en",
"ja",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-08T19:42:16Z |
---
library_name: transformers
base_model: openai/gpt-oss-20b
language: [en, ja]
pipeline_tag: text-generation
tags: []
---
# connector-mini-v1
**Overview**
This is a test model.
**Technical notes**
- Base: `openai/gpt-oss-20b` (bf16)
- Steering: rank-1 delta on Q/K/V across 24 layers (RMSNorm-aware)
- Concept vector: `concept_vec_v15k.pt`, shape [24, 6, 2880], gain=0.5
- Checkpoint: single baked weights (no LoRA/adapters; knowledge → base)
- Data used: neutral_examples=86376, pairs_used=14400
- Source files: `narukijima/connector` → `C_instruction_pairs_en.jsonl`, `C_instruction_pairs_ja.jsonl`
- Inference: use base tokenizer & chat template
**Quick inference**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
M = "narukijima/connector-mini-v1"
tok = AutoTokenizer.from_pretrained(M, trust_remote_code=True)
mdl = AutoModelForCausalLM.from_pretrained(
M, torch_dtype=torch.bfloat16, device_map='auto', trust_remote_code=True
)
msgs = [{"role":"user","content":"test"}]
p = tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
out = mdl.generate(**tok(p, return_tensors='pt').to(mdl.device),
max_new_tokens=64, do_sample=True, temperature=0.7)
print(tok.decode(out[0], skip_special_tokens=True))
```
|
narukijima/thinker-mini-v1
|
narukijima
| 2025-08-12T20:21:21Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"custom_code",
"en",
"ja",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-08T20:08:40Z |
---
library_name: transformers
base_model: openai/gpt-oss-20b
language: [en, ja]
pipeline_tag: text-generation
tags: []
---
# thinker-mini-v1
**Overview**
This is a test model.
**Technical notes**
- Base: `openai/gpt-oss-20b` (bf16)
- Steering: rank-1 delta on Q/K/V across 24 layers (RMSNorm-aware)
- Concept vector: `concept_vec_v15k.pt`, shape [24, 6, 2880], gain=0.5
- Checkpoint: single baked weights (no LoRA/adapters; knowledge → base)
- Data used: neutral_examples=86376, pairs_used=14394
- Source files: `narukijima/thinker` → `T_instruction_pairs_en.jsonl`, `T_instruction_pairs_ja.jsonl`
- Inference: use base tokenizer & chat template
**Quick inference**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
M = "narukijima/thinker-mini-v1"
tok = AutoTokenizer.from_pretrained(M, trust_remote_code=True)
mdl = AutoModelForCausalLM.from_pretrained(
M, torch_dtype=torch.bfloat16, device_map='auto', trust_remote_code=True
)
msgs = [{"role":"user","content":"test"}]
p = tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
out = mdl.generate(**tok(p, return_tensors='pt').to(mdl.device),
max_new_tokens=64, do_sample=True, temperature=0.7)
print(tok.decode(out[0], skip_special_tokens=True))
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755029990
|
ggozzy
| 2025-08-12T20:21:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:21:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yasserrmd/RSCaLM-138M-LLaMA
|
yasserrmd
| 2025-08-12T20:20:22Z | 0 | 0 | null |
[
"safetensors",
"llama",
"dataset:HuggingFaceFW/fineweb-edu",
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T20:08:06Z |
---
datasets:
- HuggingFaceFW/fineweb-edu
license: apache-2.0
---
# RSCaLM-138M-LLaMA
**RSCaLM** (Research Scale Causal Language Model) is an experimental 138M-parameter LLaMA-architecture model trained for **20,000 steps**.
This run was conducted purely for **experimental and benchmarking purposes**, with **no high expectations** for downstream task quality.
---
## Experiment Summary
* **Architecture:** LLaMA-style causal decoder (a config sketch follows this list)
* Rotary positional embeddings (RoPE)
* Pre-normalization with RMSNorm
* SwiGLU feed-forward layers
* Multi-head self-attention with key-value caching support
* **Parameter Count:** \~138M
* **Context Length:** 2048 tokens
* **Tokenizer:** LLaMA tokenizer
* **Training Framework:** PyTorch + Hugging Face Transformers
* **Optimizer:** AdamW (ฮฒ1=0.9, ฮฒ2=0.95, weight decay=0.1)
* **Scheduler:** Cosine decay with warmup
* **Precision:** Mixed-precision (FP16/BF16)
* **Batching:** Gradient accumulation to simulate large batch size
* **Dataset:** General text corpus for pipeline validation (not domain-specific)
* **Steps Completed:** 20,000 (\~32% of planned total)
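For orientation, the summary above maps onto a Hugging Face `LlamaConfig` roughly as follows. The width, depth, and head count here are guesses chosen only to land near 138M parameters; the actual values are not published in this card.
```python
from transformers import LlamaConfig, LlamaForCausalLM

# Hypothetical hyperparameters -- not the released config.
config = LlamaConfig(
    vocab_size=32_000,             # LLaMA tokenizer
    hidden_size=768,               # assumed width
    intermediate_size=2048,        # assumed SwiGLU FFN width
    num_hidden_layers=12,          # assumed depth
    num_attention_heads=12,        # assumed head count
    max_position_embeddings=2048,  # matches the stated context length
    rms_norm_eps=1e-5,
)
model = LlamaForCausalLM(config)
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # ~134M with these guesses
```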
---
## Validation Loss Progress
| Step | Val Loss |
| ----- | -------- |
| 1000 | 5.5968 |
| 2000 | 4.8513 |
| 5000 | 4.2105 |
| 10000 | 3.9603 |
| 15000 | 3.8497 |
| 20000 | 3.7891 |
Validation loss improved steadily across the run and was still decreasing at step 20,000.
---
## ⚠️ Notes
* This is an **early prototype**, not tuned for production use.
* Training stopped after \~32% of planned total steps.
* Possible repetition loops observed in generation; this is expected for low-step runs.
* Intended for research reference, not for deployment in critical tasks.
---
## 🔧 Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "yasserrmd/RSCaLM-138M-LLaMA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
prompt = "The sun is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## 🔧 Example Usage (with repetition control)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "yasserrmd/RSCaLM-138M-LLaMA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
prompt = "when a man goes to fishing"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generation settings to reduce repetition
outputs = model.generate(
**inputs,
max_new_tokens=100, # Limit length of output
temperature=0.7, # Lower temperature = more focused
top_p=0.9, # Nucleus sampling
top_k=50, # Top-K filtering
repetition_penalty=1.2, # Penalize repeating tokens
no_repeat_ngram_size=3, # Prevent repeating trigrams
eos_token_id=tokenizer.eos_token_id, # End generation at EOS
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
### 💡 Tips for controlling repetition:
1. **`repetition_penalty`** – Increase slightly above `1.0` (e.g., `1.2–1.5`) to discourage repeated phrases.
2. **`no_repeat_ngram_size`** – Set to `3` or `4` to avoid repeated n-grams.
3. **`top_k` + `top_p`** – Combine both for better randomness control.
4. **Lower `temperature`** – Keeps outputs focused and less chaotic.
5. **Stop sequences** – Add specific words/phrases to halt generation early if needed.
---
## License
apache-2.0
|
fernandorank/fernando-lora-trainer
|
fernandorank
| 2025-08-12T20:19:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T19:37:04Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: fernando
---
# Fernando Lora Trainer
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `fernando` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "fernando",
"lora_weights": "https://huggingface.co/fernandorank/fernando-lora-trainer/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('fernandorank/fernando-lora-trainer', weight_name='lora.safetensors')
image = pipeline('fernando').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/fernandorank/fernando-lora-trainer/discussions) to add images that show off what you've made with this LoRA.
|
ACECA/lowMvMax_188
|
ACECA
| 2025-08-12T20:19:42Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T15:17:45Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
bamitunde/blockassist-bc-mimic_humming_frog_1755029891
|
bamitunde
| 2025-08-12T20:19:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mimic humming frog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:19:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mimic humming frog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Gemma-3-R1-27B-v1-i1-GGUF
|
mradermacher
| 2025-08-12T20:18:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TheDrummer/Gemma-3-R1-27B-v1",
"base_model:quantized:TheDrummer/Gemma-3-R1-27B-v1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-12T18:06:54Z |
---
base_model: TheDrummer/Gemma-3-R1-27B-v1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/TheDrummer/Gemma-3-R1-27B-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Gemma-3-R1-27B-v1-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
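As a concrete starting point, any file from the quant table below can be fetched and run with `llama-cpp-python`; the filename and generation settings here are placeholders, so substitute the quant you actually want.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Example quant only -- pick any filename from the table below.
path = hf_hub_download(
    repo_id="mradermacher/Gemma-3-R1-27B-v1-i1-GGUF",
    filename="Gemma-3-R1-27B-v1.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Explain imatrix quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```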
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 6.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 6.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 9.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q2_K.gguf) | i1-Q2_K | 10.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 10.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 12.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 14.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q4_0.gguf) | i1-Q4_0 | 15.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 15.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 16.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q4_1.gguf) | i1-Q4_1 | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 18.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-R1-27B-v1-i1-GGUF/resolve/main/Gemma-3-R1-27B-v1.i1-Q6_K.gguf) | i1-Q6_K | 22.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
koloni/blockassist-bc-deadly_graceful_stingray_1755028283
|
koloni
| 2025-08-12T20:17:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:17:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gasoline2255/blockassist-bc-flightless_sizable_wildebeest_1755029621
|
gasoline2255
| 2025-08-12T20:16:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flightless sizable wildebeest",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:16:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless sizable wildebeest
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CLAUSE-Bielefeld/communicative-baby-rfsemsim
|
CLAUSE-Bielefeld
| 2025-08-12T20:15:52Z | 0 | 0 | null |
[
"safetensors",
"llama",
"en",
"base_model:CLAUSE-Bielefeld/llamalogue",
"base_model:finetune:CLAUSE-Bielefeld/llamalogue",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-08-05T07:37:08Z |
---
license: cc-by-nc-4.0
language:
- en
base_model:
- bbunzeck/llamalogue
---
|
mrkevin1/advanced_thinker_v2
|
mrkevin1
| 2025-08-12T20:14:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T20:13:17Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3
|
ArtusDev
| 2025-08-12T20:14:08Z | 0 | 0 | null |
[
"exl3",
"base_model:TheDrummer/Gemma-3-R1-12B-v1",
"base_model:quantized:TheDrummer/Gemma-3-R1-12B-v1",
"region:us"
] | null | 2025-08-12T17:26:17Z |
---
base_model: TheDrummer/Gemma-3-R1-12B-v1
base_model_relation: quantized
quantized_by: ArtusDev
tags:
- exl3
---
## EXL3 Quants of TheDrummer/Gemma-3-R1-12B-v1
EXL3 quants of [TheDrummer/Gemma-3-R1-12B-v1](https://huggingface.co/TheDrummer/Gemma-3-R1-12B-v1) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download a quant by targeting the specific quant revision (branch):
```bash
huggingface-cli download ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
roeker/blockassist-bc-quick_wiry_owl_1755029557
|
roeker
| 2025-08-12T20:14:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:13:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
varsunk/Qwen3-4B-LORA-GRPO-Experiment
|
varsunk
| 2025-08-12T20:11:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T20:06:08Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** varsunk
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Grogun/blockassist-bc-lightfooted_yapping_macaw_1755029306
|
Grogun
| 2025-08-12T20:09:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted yapping macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:08:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted yapping macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Osrivers/fluxPlusFp8_v10.safetensors
|
Osrivers
| 2025-08-12T20:08:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-12T20:04:40Z |
---
license: creativeml-openrail-m
---
|
meowkart/dither-v1-16by16
|
meowkart
| 2025-08-12T20:08:34Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2025-08-12T20:08:08Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/images.jpeg
text: '-'
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# Dither V1 16x16
<Gallery />
## Download model
[Download](/meowkart/dither-v1-16by16/tree/main) them in the Files & versions tab.
|
roeker/blockassist-bc-quick_wiry_owl_1755029064
|
roeker
| 2025-08-12T20:05:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:05:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755029074
|
ggozzy
| 2025-08-12T20:05:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:05:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
R433/TruthFinderLLM-Mistral-7B-Instruct-Wikileaks-SFT-GGUF
|
R433
| 2025-08-12T20:05:33Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-12T18:57:50Z |
---
license: apache-2.0
---
|
vengky/blockassist-bc-wild_gentle_manatee_1755025697
|
vengky
| 2025-08-12T20:03:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild gentle manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:03:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild gentle manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755027290
|
calegpedia
| 2025-08-12T20:03:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:03:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
m-mulet/try2_qwen_2.5_7b-cat_teacher
|
m-mulet
| 2025-08-12T20:01:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T19:56:40Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755028769
|
ggozzy
| 2025-08-12T20:00:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T20:00:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rickcotuit/sd-class-butterflies-32
|
rickcotuit
| 2025-08-12T19:59:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2025-08-12T19:58:06Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('rickcotuit/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
dreamygeek/blockassist-bc-swift_amphibious_alpaca_1755026909
|
dreamygeek
| 2025-08-12T19:59:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"swift amphibious alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:58:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- swift amphibious alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rwillh11/mdeberta_NLI_policy_noContext
|
rwillh11
| 2025-08-12T19:58:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-12T19:58:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755027656
|
Sayemahsjn
| 2025-08-12T19:58:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:58:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755028604
|
roeker
| 2025-08-12T19:57:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:57:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sirev/Qlora-lfm2-700m-mental-health
|
sirev
| 2025-08-12T19:57:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"lfm2",
"text-generation",
"conversational",
"dataset:ShenLab/MentalChat16K",
"arxiv:1910.09700",
"base_model:LiquidAI/LFM2-700M",
"base_model:finetune:LiquidAI/LFM2-700M",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T19:07:52Z |
---
library_name: transformers
datasets:
- ShenLab/MentalChat16K
base_model:
- LiquidAI/LFM2-700M
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755027103
|
indoempatnol
| 2025-08-12T19:56:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:56:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755028463
|
ggozzy
| 2025-08-12T19:55:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:55:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1755028246
|
Gemvision13
| 2025-08-12T19:52:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:52:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hejazizo/grpo-merged-checkpoint-891_2025-08-11_00-30
|
hejazizo
| 2025-08-12T19:51:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:hejazizo/merged-checkpoint-891",
"base_model:finetune:hejazizo/merged-checkpoint-891",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T04:30:53Z |
---
base_model: hejazizo/merged-checkpoint-891
library_name: transformers
model_name: grpo-merged-checkpoint-891_2025-08-11_00-30
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for grpo-merged-checkpoint-891_2025-08-11_00-30
This model is a fine-tuned version of [hejazizo/merged-checkpoint-891](https://huggingface.co/hejazizo/merged-checkpoint-891).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hejazizo/grpo-merged-checkpoint-891_2025-08-11_00-30", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hejazizo-ali-pytopia/grpo-merged-checkpoint-891/runs/bxygyiuf)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
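For orientation, a GRPO run with TRL is typically wired up along the lines below. The dataset and reward function are placeholders following the TRL quickstart pattern, not the actual setup used to produce this checkpoint.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder reward: prefers completions near 20 characters.
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # stand-in dataset

trainer = GRPOTrainer(
    model="hejazizo/merged-checkpoint-891",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out", per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()
```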
### Framework versions
- TRL: 0.20.0
- Transformers: 4.55.0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755028158
|
ggozzy
| 2025-08-12T19:50:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:50:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andr0m4da/blockassist-bc-grazing_hunting_boar_1755028106
|
andr0m4da
| 2025-08-12T19:50:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing hunting boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:49:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing hunting boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755026574
|
koloni
| 2025-08-12T19:48:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:47:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755027954
|
Ferdi3425
| 2025-08-12T19:47:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:46:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
johngreendr1/2afad5de-217f-4ab5-860f-b3dd1b442cdc
|
johngreendr1
| 2025-08-12T19:47:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"region:us"
] | null | 2025-08-12T14:37:53Z |
---
base_model: NousResearch/Yarn-Mistral-7b-128k
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
mradermacher/AFM-WebAgent-7B-rl-i1-GGUF
|
mradermacher
| 2025-08-12T19:45:57Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:PersonalAILab/AFM-WebAgent-7B-rl",
"base_model:quantized:PersonalAILab/AFM-WebAgent-7B-rl",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-12T12:23:42Z |
---
base_model: PersonalAILab/AFM-WebAgent-7B-rl
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/PersonalAILab/AFM-WebAgent-7B-rl
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#AFM-WebAgent-7B-rl-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/AFM-WebAgent-7B-rl-i1-GGUF/resolve/main/AFM-WebAgent-7B-rl.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755027853
|
ggozzy
| 2025-08-12T19:45:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:45:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755026271
|
kojeklollipop
| 2025-08-12T19:43:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:43:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755027684
|
roeker
| 2025-08-12T19:42:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:42:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ibm-granite/granite-vision-3.3-2b-GGUF
|
ibm-granite
| 2025-08-12T19:42:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"language",
"granite-3.3",
"en",
"arxiv:2502.09927",
"base_model:ibm-granite/granite-vision-3.3-2b",
"base_model:quantized:ibm-granite/granite-vision-3.3-2b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T16:43:29Z |
---
license: apache-2.0
language:
- en
tags:
- language
- granite-3.3
- gguf
base_model:
- ibm-granite/granite-vision-3.3-2b
library_name: transformers
---
> [!NOTE]
> This repository contains models that have been converted to the GGUF format with various quantizations from an IBM Granite base model.
>
> Please reference the base model's full model card here:
> https://huggingface.co/ibm-granite/granite-vision-3.3-2b
**Model Summary**: Granite-vision-3.3-2b is a compact and efficient vision-language model, specifically designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more. Granite-vision-3.3-2b introduces several novel experimental features such as *image segmentation*, *doctags generation*, and *multi-page support* (see **Experimental Capabilities** for more details) and offers enhanced safety when compared to earlier Granite vision models. The model was trained on meticulously curated instruction-following data, comprising diverse public and synthetic datasets tailored to support a wide range of document understanding and general image tasks. Granite-vision-3.3-2b was trained by fine-tuning a Granite large language model with both image and text modalities.
- **Paper:** [Granite Vision: a lightweight, open-source multimodal model for enterprise Intelligence](https://arxiv.org/abs/2502.09927). Note that the paper describes Granite Vision 3.2; Granite Vision 3.3 shares most of its technical underpinnings but adds a new and improved vision encoder, many new high-quality training datasets, and several new experimental capabilities.
- **Release Date**: Jun 11th, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Input Format:** The model currently supports English instructions and images (PNG, JPEG) as input.
**Intended Use:** The model is intended to be used in enterprise applications that involve processing visual and text data. In particular, the model is well-suited for a range of visual document understanding tasks, such as analyzing tables and charts, performing optical character recognition (OCR), and answering questions based on document content. Additionally, its capabilities extend to general image understanding, enabling it to be applied to a broader range of business applications. For tasks that exclusively involve text-based input, we suggest using our Granite large language models, which are optimized for text-only processing and offer superior performance compared to this model.
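As a hedged illustration, the base (non-GGUF) model can be queried through `transformers`; this is a minimal sketch, with the image URL as a placeholder and `device_map="auto"` assuming `accelerate` is installed:
```python
from transformers import AutoProcessor, AutoModelForVision2Seq

model_path = "ibm-granite/granite-vision-3.3-2b"  # base model, not the GGUF files in this repo
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForVision2Seq.from_pretrained(model_path, device_map="auto")

# One user turn containing an image and a question about it.
conversation = [
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/chart.png"},  # placeholder image
        {"type": "text", "text": "What does this chart show?"},
    ]},
]
inputs = processor.apply_chat_template(
    conversation, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```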
|
Grogun/blockassist-bc-lightfooted_yapping_macaw_1755027574
|
Grogun
| 2025-08-12T19:40:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted yapping macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:39:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted yapping macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Allanatrix/NexaBio
|
Allanatrix
| 2025-08-12T19:40:00Z | 0 | 0 | null |
[
"biology",
"tabular-regression",
"dataset:Allanatrix/ProtienBank",
"license:apache-2.0",
"region:us"
] |
tabular-regression
| 2025-06-13T18:43:03Z |
---
license: apache-2.0
pipeline_tag: tabular-regression
tags:
- biology
datasets:
- Allanatrix/ProtienBank
metrics:
- accuracy
---
# NexaBio: Advanced Protein Structure Prediction Models
**NexaBio** is a sophisticated two-stage model suite designed for high-accuracy protein structure prediction from amino acid sequences. It comprises two complementary models:
- **NexaBio_1**: A Convolutional Neural Network (CNN) and Bidirectional LSTM (BiLSTM) model for secondary structure prediction.
- **NexaBio_2**: A Variational Autoencoder (VAE) and Diffusion-based model for tertiary (3D) structure prediction.
NexaBio is a core component of the [Nexa Scientific Model Suite](https://huggingface.co/spaces/Allanatrix/NexaHub), a collection of machine learning models advancing scientific discovery.
## Model Overview
### NexaBio_1: Secondary Structure Prediction
- **Architecture**: CNN combined with BiLSTM for robust sequence modeling.
- **Input**: Amino acid sequence (one-hot encoded or embedded).
- **Output**: Secondary structure classifications (e.g., Helix, Sheet, Coil).
- **Use Case**: Identification of local structural motifs and protein folding patterns.
### NexaBio_2: Tertiary Structure Prediction
- **Architecture**: VAE integrated with a Diffusion Model for generative 3D modeling.
- **Input**: Amino acid sequence (optionally augmented with secondary structure predictions).
- **Output**: 3D coordinates of protein backbone atoms.
- **Use Case**: Full tertiary structure prediction for structural analysis and design.
## Applications
- **Structural Bioinformatics**: Enabling precise protein structure analysis for research.
- **Drug Discovery**: Supporting protein-ligand interaction studies and therapeutic design.
- **Protein Engineering**: Facilitating the design of novel proteins for industrial and medical applications.
- **Synthetic Biology**: Generating protein structures for biotechnological innovation.
- **Academic Research**: Serving as a tool for educational and exploratory studies.
## Getting Started
### Example Usage
```python
from transformers import AutoModel
# Initialize the secondary structure prediction model
model_sec = AutoModel.from_pretrained("Allanatrix/NexaBio_1")
# Initialize the tertiary structure prediction model
model_ter = AutoModel.from_pretrained("Allanatrix/NexaBio_2")
# Process an amino acid sequence (refer to model documentation for input formatting)
```
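Since NexaBio_1 expects one-hot encoded sequences, a minimal encoding sketch follows; the 20-letter amino acid alphabet and its ordering are assumptions, not taken from the model documentation:
```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # assumed canonical 20-letter alphabet
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(sequence: str) -> np.ndarray:
    """Encode an amino acid sequence as a (length, 20) one-hot matrix."""
    encoding = np.zeros((len(sequence), len(AMINO_ACIDS)), dtype=np.float32)
    for pos, aa in enumerate(sequence.upper()):
        encoding[pos, AA_INDEX[aa]] = 1.0
    return encoding

print(one_hot_encode("MKT").shape)  # (3, 20)
```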
For comprehensive instructions, including inference APIs and preprocessing details, consult the individual model cards on Hugging Face.
## Citation and License
If you utilize NexaBio in your research or applications, please cite this repository and include a link to the [Nexa R&D Space](https://huggingface.co/spaces/Allanatrix/NexaR&D).
The models and associated code are licensed under the **Boost Software License 1.1 (BSL-1.1)**.
## Part of the Nexa Scientific Ecosystem
Discover other components of the Nexa Scientific Stack:
- [Nexa Data Studio](https://huggingface.co/spaces/Allanatrix/NexaDataStudio): Data processing and visualization tools.
- [Nexa R&D](https://huggingface.co/spaces/Allanatrix/NexaR&D): Research-focused model development environment.
- [Nexa Infrastructure](https://huggingface.co/spaces/Allanatrix/NexaInfrastructure): Scalable ML deployment solutions.
- [Nexa Hub](https://huggingface.co/spaces/Allanatrix/NexaHub): Central portal for Nexa resources.
---
*Developed and maintained by [Allan](https://huggingface.co/Allanatrix), an independent machine learning researcher specializing in scientific AI and infrastructure.*
|
TAUR-dev/M-test_all_parts__sbatch-sft
|
TAUR-dev
| 2025-08-12T19:38:55Z | 9 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-09T13:42:20Z |
# M-test_all_parts__sbatch-sft
This model was created as part of the **test_all_parts__sbatch** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: test_all_parts__sbatch
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/home/skeh/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_cd3arg_Qwen2_5_1_5B_Instruct_AnsRev_think", "template": "qwen", "cutoff_len": 16384, "max_samples": 100, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/datasets/sedrick/skillfactory/temp/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__test_all_parts__sbatch__v1", "sf_eval_before_training": false, "sf_wandb_project": "test_all_parts__sbatch_sft", "sf_eval_steps": null, "run_name": "test_all_parts__sbatch_sft"}
## Experiment Tracking
๐ **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__test_all_parts__sbatch__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-test_all_parts__sbatch-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-test_all_parts__sbatch-sft")
```
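Building on the snippet above, a hedged generation example (chat-template usage assumed from the Qwen 2.5 base model):
```python
messages = [{"role": "user", "content": "What is 17 * 24?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```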
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755027387
|
IvanJAjebu
| 2025-08-12T19:37:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:37:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/LFM2-350M-q8-hi-mlx
|
nightmedia
| 2025-08-12T19:36:43Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"lfm2",
"liquid",
"edge",
"text-generation",
"conversational",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"base_model:LiquidAI/LFM2-350M",
"base_model:quantized:LiquidAI/LFM2-350M",
"license:other",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-12T19:35:04Z |
---
library_name: mlx
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
- mlx
base_model: LiquidAI/LFM2-350M
---
# LFM2-350M-q8-hi-mlx
This model [LFM2-350M-q8-hi-mlx](https://huggingface.co/nightmedia/LFM2-350M-q8-hi-mlx) was
converted to MLX format from [LiquidAI/LFM2-350M](https://huggingface.co/LiquidAI/LFM2-350M)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/LFM2-350M-q8-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755027242
|
ggozzy
| 2025-08-12T19:35:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:35:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Grogun/blockassist-bc-lightfooted_yapping_macaw_1755027244
|
Grogun
| 2025-08-12T19:35:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted yapping macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:34:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted yapping macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/LFM2-350M-q6-hi-mlx
|
nightmedia
| 2025-08-12T19:34:51Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"lfm2",
"liquid",
"edge",
"text-generation",
"conversational",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"base_model:LiquidAI/LFM2-350M",
"base_model:quantized:LiquidAI/LFM2-350M",
"license:other",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-12T19:33:30Z |
---
library_name: mlx
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
- mlx
base_model: LiquidAI/LFM2-350M
---
# LFM2-350M-q6-hi-mlx
This model [LFM2-350M-q6-hi-mlx](https://huggingface.co/nightmedia/LFM2-350M-q6-hi-mlx) was
converted to MLX format from [LiquidAI/LFM2-350M](https://huggingface.co/LiquidAI/LFM2-350M)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/LFM2-350M-q6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
roeker/blockassist-bc-quick_wiry_owl_1755027231
|
roeker
| 2025-08-12T19:34:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:34:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755025436
|
calegpedia
| 2025-08-12T19:31:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:31:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755027000
|
IvanJAjebu
| 2025-08-12T19:31:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:30:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755026937
|
ggozzy
| 2025-08-12T19:30:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:30:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arsonor/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
arsonor
| 2025-08-12T19:29:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-08-12T13:06:42Z |
---
library_name: transformers
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5678
- Accuracy: 0.87
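For inference, a minimal hedged sketch using the `transformers` audio-classification pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="arsonor/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
predictions = classifier("path/to/clip.wav")  # placeholder audio file
print(predictions[0])  # top genre label with its score
```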
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8868 | 1.0 | 50 | 0.8761 | 0.72 |
| 0.4771 | 2.0 | 100 | 0.7632 | 0.76 |
| 0.3415 | 3.0 | 150 | 1.0356 | 0.72 |
| 0.2508 | 4.0 | 200 | 0.5432 | 0.82 |
| 0.1699 | 5.0 | 250 | 0.6632 | 0.81 |
| 0.024 | 6.0 | 300 | 0.8745 | 0.82 |
| 0.0353 | 7.0 | 350 | 0.8643 | 0.79 |
| 0.0341 | 8.0 | 400 | 0.5614 | 0.86 |
| 0.0411 | 9.0 | 450 | 0.6230 | 0.86 |
| 0.0345 | 10.0 | 500 | 0.9361 | 0.76 |
| 0.0304 | 11.0 | 550 | 0.6329 | 0.87 |
| 0.0504 | 12.0 | 600 | 1.0623 | 0.81 |
| 0.0526 | 13.0 | 650 | 0.7261 | 0.83 |
| 0.0007 | 14.0 | 700 | 0.8432 | 0.8 |
| 0.0041 | 15.0 | 750 | 0.8342 | 0.86 |
| 0.0002 | 16.0 | 800 | 0.6246 | 0.89 |
| 0.0092 | 17.0 | 850 | 0.5784 | 0.89 |
| 0.0001 | 18.0 | 900 | 0.6059 | 0.87 |
| 0.0001 | 19.0 | 950 | 0.5561 | 0.86 |
| 0.0001 | 20.0 | 1000 | 0.5483 | 0.85 |
| 0.0192 | 21.0 | 1050 | 0.5678 | 0.87 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 2.16.0
- Tokenizers 0.21.4
|
ruiji666/act_so101_eye1
|
ruiji666
| 2025-08-12T19:27:47Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:ruiji666/eye_inhand_data1",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-12T19:26:30Z |
---
datasets: ruiji666/eye_inhand_data1
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
    --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
    --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
BootesVoid/cmdnxja7k09ixsp0y4nroojx9_cme8uncjd02szrts8xlhdk69t
|
BootesVoid
| 2025-08-12T19:26:21Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T19:25:50Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MCROBER67
---
# Cmdnxja7K09Ixsp0Y4Nroojx9_Cme8Uncjd02Szrts8Xlhdk69T
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MCROBER67` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MCROBER67",
"lora_weights": "https://huggingface.co/BootesVoid/cmdnxja7k09ixsp0y4nroojx9_cme8uncjd02szrts8xlhdk69t/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmdnxja7k09ixsp0y4nroojx9_cme8uncjd02szrts8xlhdk69t', weight_name='lora.safetensors')
image = pipeline('MCROBER67').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
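As a hedged illustration of adjusting LoRA influence (`fuse_lora` is a diffusers API; the scale value and output filename are arbitrary):
```py
# Fuse the LoRA into the base weights at reduced strength, then generate and save.
pipeline.fuse_lora(lora_scale=0.9)
image = pipeline('MCROBER67').images[0]
image.save("mcrober67_sample.png")
```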
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmdnxja7k09ixsp0y4nroojx9_cme8uncjd02szrts8xlhdk69t/discussions) to add images that show off what youโve made with this LoRA.
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755026688
|
IvanJAjebu
| 2025-08-12T19:25:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:25:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755026433
|
canoplos112
| 2025-08-12T19:23:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:21:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755026534
|
roeker
| 2025-08-12T19:23:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:23:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1755026482
|
Gemvision13
| 2025-08-12T19:22:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T19:22:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/LFM2-700M-dwq6-mlx
|
nightmedia
| 2025-08-12T19:22:47Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"lfm2",
"liquid",
"edge",
"text-generation",
"conversational",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"base_model:LiquidAI/LFM2-700M",
"base_model:quantized:LiquidAI/LFM2-700M",
"license:other",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-12T19:19:15Z |
---
library_name: mlx
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
- mlx
base_model: LiquidAI/LFM2-700M
---
# LFM2-700M-dwq6-mlx
This model [LFM2-700M-dwq6-mlx](https://huggingface.co/nightmedia/LFM2-700M-dwq6-mlx) was
converted to MLX format from [LiquidAI/LFM2-700M](https://huggingface.co/LiquidAI/LFM2-700M)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/LFM2-700M-dwq6-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
bboeun/food-finetuned3-re2-model
|
bboeun
| 2025-08-12T19:22:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T19:08:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cracs/rpg-spell-gpt2
|
cracs
| 2025-08-12T19:22:17Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T19:17:59Z |
---
license: apache-2.0
---
|