Dataset columns (from the dataset-viewer summary): `modelId` (string, 5–139 chars), `author` (string, 2–42 chars), `last_modified` (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-30 18:26:50), `downloads` (int64, 0 to 223M), `likes` (int64, 0 to 11.7k), `library_name` (string, 530 classes), `tags` (list, 1 to 4.05k items), `pipeline_tag` (string, 55 classes), `createdAt` (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-30 18:26:48), `card` (string, 11 chars to 1.01M chars).

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
motza0025/blockassist-bc-solitary_cunning_cockroach_1756528491
|
motza0025
| 2025-08-30T05:00:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"solitary cunning cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:59:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- solitary cunning cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1756528169
|
calegpedia
| 2025-08-30T04:58:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:58:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
whamid/blockassist-bc-peaceful_leaping_sealion_1756529687
|
whamid
| 2025-08-30T04:57:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful leaping sealion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:57:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful leaping sealion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JasonTree/Qwen2.5-instruct-7B-SFT
|
JasonTree
| 2025-08-30T04:57:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T04:55:05Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-instruct-7B-SFT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-instruct-7B-SFT
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JasonTree/Qwen2.5-instruct-7B-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alelab/QuiteGive/runs/e9lp9qk3)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
armina69/blockassist-bc-slow_zealous_hamster_1756529749
|
armina69
| 2025-08-30T04:56:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slow zealous hamster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:56:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slow zealous hamster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gensynme/blockassist-bc-alert_melodic_swan_1756529746
|
gensynme
| 2025-08-30T04:56:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert melodic swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:55:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert melodic swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pidbu/blockassist-bc-whistling_alert_shrew_1756529613
|
pidbu
| 2025-08-30T04:55:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:54:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756527954
|
GroomerG
| 2025-08-30T04:52:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:52:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qgallouedec/Qwen3-4B-SFT-20250830044336
|
qgallouedec
| 2025-08-30T04:50:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"hf_jobs",
"conversational",
"dataset:trl-lib/Capybara",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T04:44:47Z |
---
base_model: Qwen/Qwen3-4B
datasets: trl-lib/Capybara
library_name: transformers
model_name: Qwen3-4B-SFT-20250830044336
tags:
- generated_from_trainer
- sft
- trl
- hf_jobs
licence: license
---
# Model Card for Qwen3-4B-SFT-20250830044336
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-4B-SFT-20250830044336", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qgallouedec/Qwen3-4B-SFT-20250830044334
|
qgallouedec
| 2025-08-30T04:49:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"hf_jobs",
"trl",
"sft",
"conversational",
"dataset:trl-lib/Capybara",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T04:44:36Z |
---
base_model: Qwen/Qwen3-4B
datasets: trl-lib/Capybara
library_name: transformers
model_name: Qwen3-4B-SFT-20250830044334
tags:
- generated_from_trainer
- hf_jobs
- trl
- sft
licence: license
---
# Model Card for Qwen3-4B-SFT-20250830044334
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-4B-SFT-20250830044334", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
pidbu/blockassist-bc-whistling_alert_shrew_1756529267
|
pidbu
| 2025-08-30T04:49:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:48:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1756529182
|
vendi11
| 2025-08-30T04:47:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:47:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
adeelahmad/ReasonableQwen3-4B
|
adeelahmad
| 2025-08-30T04:46:56Z | 140 | 2 |
mlx
|
[
"mlx",
"safetensors",
"gguf",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"doi:10.57967/hf/6375",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-28T03:38:27Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-4B
---
# ReasonableQwen3-4B
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-4B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 is included in recent versions of both **`transformers` (≥ 4.52.4)** and **`mlx_lm` (≥ 0.25.2)**, and we advise you to use the latest version of each.
Older versions (e.g., `transformers<4.51.0`) may raise errors like:
```text
KeyError: 'qwen3'
```
Install or upgrade both packages:
```bash
pip install --upgrade transformers mlx_lm
```
The following code snippet illustrates how to use the model to generate content from a given input.
```python
from mlx_lm import load, generate
model, tokenizer = load("adeelahmad/ReasonableQwen3-4B")
prompt = "Hello, please introduce yourself and tell me what you can do."
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True
)
response = generate(
model,
tokenizer,
prompt=prompt,
verbose=True,
max_tokens=1024
)
print(response)
```
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
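The think block can be separated from the final answer with a simple string split. A minimal sketch (assuming the `</think>` marker appears at most once, as emitted by the chat template):

```python
def split_think(text: str) -> tuple[str, str]:
    """Split raw model output into (thinking, answer).

    Assumes at most one <think>...</think> block, as produced by
    Qwen3's chat template.
    """
    if "</think>" in text:
        thinking, _, answer = text.partition("</think>")
        return thinking.replace("<think>", "").strip(), answer.strip()
    return "", text.strip()

raw = "<think>The user greets me; reply briefly.</think>Hello! How can I help?"
thinking, answer = split_think(raw)
print(answer)  # Hello! How can I help?
```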
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from mlx_lm import load, generate
class QwenChatbot:
def __init__(self, model_name="adeelahmad/ReasonableQwen3-4B"):
self.model, self.tokenizer = load(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
response = generate(
self.model,
self.tokenizer,
prompt=text,
verbose=True,
max_tokens=32768
)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many 'r's are in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many 'r's are in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels at tool calling. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of Qwen3's agentic abilities. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
"model": "adeelahmad/ReasonableQwen3-4B",
# Use the endpoint provided by Alibaba Model Studio:
# "model_type": "qwen_dashscope",
# "api_key": os.getenv("DASHSCOPE_API_KEY"),
# Use a custom endpoint compatible with OpenAI API:
"model_server": "http://localhost:8000/v1", # api_base
"api_key": "EMPTY",
# Other parameters:
# "generate_cfg": {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# "thought_in_content": True,
# },
}
# Define Tools
tools = [
{
"mcpServers": { # You can specify the MCP configuration file
"time": {
"command": "uvx",
"args": ["mcp-server-time", "--local-timezone=Asia/Shanghai"]
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
"code_interpreter", # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [
{
"role": "user",
"content": "https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen"
}
]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
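The history-stripping step described in point 4 can be sketched as follows (a minimal illustration; the bundled Jinja2 chat template already does this for frameworks that use it):

```python
import re

# Matches one <think>...</think> block plus any trailing whitespace.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_thinking(messages):
    """Return a copy of the chat history with reasoning content removed
    from assistant turns, keeping only the final answers."""
    cleaned = []
    for msg in messages:
        if msg.get("role") == "assistant":
            msg = {**msg, "content": THINK_RE.sub("", msg["content"]).strip()}
        cleaned.append(msg)
    return cleaned

history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "<think>Greeting.</think>Hello!"},
]
print(strip_thinking(history)[1]["content"])  # Hello!
```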
### Citation
If you find our work helpful, feel free to cite us.
```bibtex
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
qgallouedec/Qwen3-4B-SFT-20250830043838
|
qgallouedec
| 2025-08-30T04:46:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"hf_jobs",
"trl",
"sft",
"conversational",
"dataset:trl-lib/Capybara",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T04:39:31Z |
---
base_model: Qwen/Qwen3-4B
datasets: trl-lib/Capybara
library_name: transformers
model_name: Qwen3-4B-SFT-20250830043838
tags:
- generated_from_trainer
- hf_jobs
- trl
- sft
licence: license
---
# Model Card for Qwen3-4B-SFT-20250830043838
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-4B-SFT-20250830043838", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756528972
|
liukevin666
| 2025-08-30T04:45:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:44:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756527516
|
Loder-S
| 2025-08-30T04:43:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:43:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-6bpw
|
zerofata
| 2025-08-30T04:43:03Z | 10 | 0 | null |
[
"safetensors",
"mistral",
"base_model:zerofata/MS3.2-PaintedFantasy-Visage-v3-34B",
"base_model:quantized:zerofata/MS3.2-PaintedFantasy-Visage-v3-34B",
"6-bit",
"exl3",
"region:us"
] | null | 2025-08-25T02:20:48Z |
---
base_model:
- zerofata/MS3.2-PaintedFantasy-Visage-v3-34B
---
<style>
.container {
--primary-accent: #C0C0C0;
--secondary-accent: #4A9EFF;
--glow-primary: rgba(192, 192, 192, 0.6);
--glow-secondary: rgba(74, 158, 255, 0.6);
--bg-main: #0B0A18;
--bg-container: #110F24;
--bg-card: rgba(20, 18, 40, 0.7);
--text-main: #DCDCDC;
--text-muted: #9E9E9E;
--white: #FFFFFF;
--border-color: #3C3A50;
--font-title: 'Cinzel', serif;
--font-body: 'EB Garamond', serif;
--font-code: 'Courier New', monospace;
font-family: var(--font-body);
color: var(--text-main);
line-height: 1.6;
font-weight: 400;
max-width: 1100px;
margin: 20px auto;
padding: 25px;
background-color: var(--bg-main);
background-image: linear-gradient(rgba(11, 10, 24, 0.95), rgba(11, 10, 24, 0.95)), url('https://www.transparenttextures.com/patterns/stardust.png');
min-height: calc(100vh - 40px);
border-radius: 8px;
box-shadow: 0 0 25px rgba(0,0,0,0.7);
border: 1px solid var(--border-color);
}
.container .title-container {
background: linear-gradient(135deg, rgba(20, 18, 40, 0.8), rgba(30, 28, 50, 0.6));
margin-bottom: 30px;
border: 1px solid var(--border-color);
border-radius: 6px;
padding: 25px;
text-align: center;
position: relative;
box-shadow: 0 5px 15px rgba(0,0,0,0.4);
overflow: hidden;
}
.container .title-main {
color: var(--white);
font-size: 2.5rem;
font-weight: 700;
margin: 0;
letter-spacing: 4px;
display: block;
text-transform: uppercase;
text-shadow: 0 0 4px var(--glow-primary), 0 0 8px var(--glow-primary), 0 0 12px var(--glow-primary);
font-family: var(--font-title);
}
.container .lemonade-text {
color: var(--secondary-accent);
text-shadow: 0 0 8px var(--glow-secondary);
}
.container .title-subtitle {
padding-left: 0;
margin-top: 15px;
}
.container .subtitle-text {
color: var(--text-muted);
font-size: 1.2rem;
font-family: var(--font-body);
font-style: italic;
font-weight: 400;
letter-spacing: 2px;
text-transform: uppercase;
opacity: 0.8;
}
.container img {
max-width: 100%;
border: 2px solid var(--border-color);
margin-bottom: 40px;
box-shadow: 0 5px 15px rgba(0,0,0,0.5);
border-radius: 4px;
}
.container .section-container {
margin-bottom: 25px;
padding-bottom: 25px;
border-bottom: 1px dashed var(--border-color);
}
.container .section-container:last-of-type {
border-bottom: none;
padding-bottom: 0;
margin-bottom: 0;
}
.container .section-header {
display: flex;
align-items: center;
padding: 0 0 15px 0;
}
.container .section-title {
font-family: var(--font-title);
background: linear-gradient(45deg, var(--secondary-accent), var(--primary-accent));
background-clip: text;
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
font-size: 1.4rem;
margin: 0 !important;
padding: 0 0 10px 0 !important;
letter-spacing: 1px;
font-weight: 700;
text-transform: uppercase;
border: none !important;
position: relative;
display: inline-block;
}
.container .section-title::after {
content: '';
position: absolute;
bottom: 0;
left: 0;
width: 100%;
height: 2px;
background-image: linear-gradient(to right, var(--secondary-accent), var(--primary-accent));
box-shadow: 0 0 6px var(--glow-secondary), 0 0 6px var(--glow-primary);
border-radius: 2px;
}
.container .section-content {
padding: 20px 0 0 0;
}
.container .subheading {
color: var(--secondary-accent);
font-size: 1.1rem;
margin-top: 20px;
margin-bottom: 12px;
font-weight: 700;
display: block;
text-transform: uppercase;
letter-spacing: 2px;
font-family: var(--font-title);
border-bottom: 1px solid var(--secondary-accent);
padding-bottom: 6px;
text-shadow: 0 0 4px var(--glow-secondary);
}
.container .data-box {
background-color: var(--bg-card);
padding: 15px;
border: 1px solid var(--border-color);
border-left: 2px solid var(--primary-accent);
margin-bottom: 15px;
box-shadow: inset 0 0 6px rgba(0,0,0,0.4);
border-radius: 4px;
font-size: 1rem;
}
.container .data-row {
display: flex;
align-items: center;
margin-bottom: 6px;
padding: 5px 0;
}
.container .data-row:last-child {
margin-bottom: 0;
}
.container .data-arrow {
color: var(--secondary-accent);
font-weight: bold;
margin-right: 10px;
font-family: var(--font-code);
font-size: 1rem;
}
.container .data-label {
color: var(--white);
font-weight: 600;
font-family: var(--font-body);
margin-right: 8px;
min-width: 80px;
}
.container a {
color: var(--primary-accent);
text-decoration: none;
font-weight: 600;
transition: all .2s;
}
.container .data-row a {
border-bottom: 1px dotted var(--primary-accent);
}
.container a:hover {
text-decoration: none;
color: var(--white);
text-shadow: 0 0 5px var(--glow-primary);
}
.container .data-row a:hover {
border-bottom-style: solid;
}
.container .dropdown-container {
margin-top: 20px;
}
.container .dropdown-summary {
cursor: pointer;
padding: 10px 0;
color: var(--text-muted);
font-size: 1.1rem;
font-weight: 700;
text-transform: none;
font-family: var(--font-title);
letter-spacing: 1px;
list-style: none;
transition: color 0.2s ease;
}
.container .dropdown-summary:hover {
color: var(--primary-accent);
}
.container .dropdown-arrow {
color: var(--secondary-accent);
margin-right: 10px;
transition: transform 0.2s ease;
}
.container .dropdown-content {
margin-top: 15px;
padding: 20px;
background-color: var(--bg-card);
border: 1px solid var(--border-color);
border-radius: 4px;
}
.container .config-title {
color: var(--text-muted);
font-size: 1rem;
margin-bottom: 10px;
font-family: var(--font-body);
text-transform: uppercase;
letter-spacing: 1px;
font-weight: 700;
}
.container pre {
background-color: #1c1c1c;
padding: 15px;
border: 1px solid var(--border-color);
white-space: pre-wrap;
word-wrap: break-word;
color: #c5c8c6;
border-radius: 4px;
box-shadow: inset 0 0 5px rgba(0,0,0,0.5);
}
.container pre code {
background: none;
color: inherit;
padding: 0;
border-radius: 0;
}
.container code {
font-family: var(--font-code);
color: var(--primary-accent);
background: var(--border-color);
padding: 2px 5px;
border-radius: 4px;
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Painted Fantasy</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Cinzel:wght@400;700&family=MedievalSharp&family=EB+Garamond:ital,wght@0,400;0,500;1,400&display=swap" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="title-container">
<div class="glitchy-overlay"></div>
<div class="title-wrapper">
<h1 class="title-main">
<span class="title-prefix">PAINTED FANTASY</span>
<span class="lemonade-text">VISAGE v3</span>
</h1>
<div class="title-subtitle">
<span class="subtitle-text">Mistral Small 3.2 Upscaled 34B</span>
</div>
</div>
</div>

<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Overview</h2>
</div>
<div class="section-content">
<p>No layer left behind edition.</p>
<p>Upscale redone with the missing final layer included. The original upscales were always missing a layer, but I never troubleshot to identify *what* layer was missing. Turns out it was the final layer. That's kind of an important one.</p>
<p>This is an uncensored creative writing and RP model. Compared to the older version it is smarter and, I think, a bit less repetitive. The old V2 is slightly more creative, though, owing to its instability.</p>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">SillyTavern Settings</h2>
</div>
<div class="section-content">
<h3 class="subheading">Recommended Roleplay Format</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Actions:</span>
<span>In plaintext</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dialogue:</span>
<span>"In quotes"</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Thoughts:</span>
<span>*In asterisks*</span>
</div>
</div>
<h3 class="subheading">Recommended Samplers</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Temp:</span>
<span>0.7-0.8</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">MinP:</span>
<span>0.05 - 0.1</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">TopP:</span>
<span>0.95</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dry:</span>
<span>0.8, 1.75, 4</span>
</div>
</div>
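For readers applying these samplers outside SillyTavern, here is a minimal sketch of how the recommended MinP cutoff interacts with temperature. This is illustrative only: it follows the common llama.cpp-style MinP rule (drop tokens below <code>min_p</code> times the top probability) and is not code from this repo.

```python
import math

def min_p_filter(logits, temperature=0.7, min_p=0.05):
    """Apply temperature, then drop tokens whose probability falls
    below min_p * (probability of the most likely token)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]          # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    threshold = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= threshold}
    z = sum(kept.values())
    return {i: p / z for i, p in kept.items()}        # renormalized

# A peaked distribution keeps few tokens; a flat one keeps more.
peaked = min_p_filter([10.0, 5.0, 1.0, 0.5])
flat = min_p_filter([2.0, 1.9, 1.8, 1.7])
print(len(peaked), len(flat))  # → 1 4
```

This is why MinP pairs well with a moderate temperature: the cutoff adapts to how confident the model is at each step.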
<h3 class="subheading">Instruct</h3>
<div class="data-box">
<p style="margin: 0;">Mistral v7 Tekken</p>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Quantizations</h2>
</div>
<div class="section-content">
<div style="margin-bottom: 20px;">
<h3 class="subheading">GGUF</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/bartowski/zerofata_MS3.2-PaintedFantasy-Visage-v3-34B-GGUF">iMatrix (bartowski)</a>
</div>
</div>
</div>
<div>
<h3 class="subheading">EXL3</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-3bpw">3bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-4bpw">4bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-4.25bpw">4.25bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-5bpw">5bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-6bpw">6bpw</a>
</div>
</div>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Creation Process</h2>
</div>
<div class="section-content">
<p>Creation Process: Upscale > CPT > SFT > DPO</p>
<p>Continued pretraining (CPT) on approximately 300 MB of light novel and FineWeb-2 corpus text.</p>
<p>SFT on approximately 8 million tokens of SFW / NSFW RP, stories and creative instruct data.</p>
<p>DPO on a high-quality RP / NSFW dataset focused on improving instruction following, reducing repetition and fixing common model mistakes.</p>
<div class="dropdown-container">
<details>
<summary class="dropdown-summary">
<span class="dropdown-arrow">></span>
Mergekit configs
</summary>
<div class="dropdown-content">
<p>Merge configurations used during the model creation process.</p>
<div class="config-title">Upscale (Passthrough)</div>
<pre><code>base_model: ConicCat/Mistral-Small-3.2-AntiRep-24B
merge_method: passthrough
dtype: bfloat16
slices:
- sources:
- model: ConicCat/Mistral-Small-3.2-AntiRep-24B
layer_range: [0, 29]
- sources:
- model: ConicCat/Mistral-Small-3.2-AntiRep-24B
layer_range: [10, 40]</code></pre>
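As a sanity check on the passthrough config above, the two slice ranges can be tallied to see how a 40-layer base becomes a 59-layer upscale, with layers 10-28 appearing twice (a rough arithmetic sketch, not part of mergekit):

```python
# layer_range values copied from the mergekit config above
slices = [(0, 29), (10, 40)]

# Each slice contributes (end - start) layers; ranges are half-open.
total_layers = sum(end - start for start, end in slices)

# Layers in the overlap of the two ranges are duplicated in the output.
duplicated = max(0, min(end for _, end in slices) - max(start for start, _ in slices))

print(total_layers, duplicated)  # → 59 19
```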
</div>
</details>
</div>
<div class="dropdown-container">
<details>
<summary class="dropdown-summary">
<span class="dropdown-arrow">></span>
Axolotl configs
</summary>
<div class="dropdown-content">
<p>Not optimized for cost / performance efficiency, YMMV.</p>
<div class="config-title">Pretrain 4*H100</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ../mergekit/pf_v3_upscale
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/pretrain_dataset_v5_stripped.jsonl
type: completion
<br>
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: qlora
load_in_4bit: true
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 1
micro_batch_size: 4
gradient_accumulation_steps: 1
learning_rate: 4e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE & PACKING
# ====================
sequence_len: 12288
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
flash_attention: true
gradient_checkpointing: offload
deepspeed: deepspeed_configs/zero1.json
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
<br>
# ====================
# EVALUATION & CHECKPOINTING
# ====================
save_strategy: steps
save_steps: 40
save_total_limit: 5 # Keep best + last few checkpoints
load_best_model_at_end: true
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1
logging_steps: 2
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-PT
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1</code></pre>
<div class="config-title">SFT 4*H100</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ./Visage-V3-PT-1/merged
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
<br>
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/dataset.jsonl
type: chat_template
split: train
chat_template_strategy: tokenizer
field_messages: messages
message_property_mappings:
role: role
content: content
roles:
user: ["user"]
assistant: ["assistant"]
system: ["system"]
<br>
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: qlora
load_in_4bit: true
lora_r: 128
lora_alpha: 128
lora_dropout: 0.1
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 3
micro_batch_size: 4
gradient_accumulation_steps: 1
learning_rate: 1e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE & PACKING
# ====================
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
flash_attention: true
gradient_checkpointing: offload
deepspeed: deepspeed_configs/zero1.json
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
<br>
# ====================
# EVALUATION & CHECKPOINTING
# ====================
save_strategy: steps
save_steps: 20
save_total_limit: 5 # Keep best + last few checkpoints
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1-SFT-2
logging_steps: 1
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-SFT
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1-SFT-2</code></pre>
<div class="config-title">DPO 2*H200</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ./Visage-V3-PT-1-SFT-2/merged
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
<br>
# ====================
# RL/DPO CONFIGURATION
# ====================
rl: dpo
rl_beta: 0.085
<br>
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/handcrafted_dataset_mistral_rep.jsonl
type: chat_template.default
field_messages: messages
field_chosen: chosen
field_rejected: rejected
message_property_mappings:
role: role
content: content
roles:
system: ["system"]
user: ["user"]
assistant: ["assistant"]
- path: ./data/approved_automated_l3_dataset.jsonl
type: chat_template.default
field_messages: messages
field_chosen: chosen
field_rejected: rejected
message_property_mappings:
role: role
content: content
roles:
system: ["system"]
user: ["user"]
assistant: ["assistant"]
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: lora
load_in_8bit: true
lora_r: 16
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 1
micro_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 2e-6
optimizer: adamw_torch_fused
lr_scheduler: cosine
warmup_steps: 5
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE CONFIGURATION
# ====================
sequence_len: 8192
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
tf32: false
flash_attention: true
gradient_checkpointing: offload
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
deepspeed: deepspeed_configs/zero1.json
<br>
# ====================
# CHECKPOINTING
# ====================
save_steps: 10
save_total_limit: 10
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1-SFT-2-DPO-2
logging_steps: 1
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-DPO
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1-SFT-2-DPO-2</code></pre>
</div>
</details>
</div>
</div>
</div>
</div>
</body>
</html>
|
zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-3bpw
|
zerofata
| 2025-08-30T04:42:54Z | 13 | 0 | null |
[
"safetensors",
"mistral",
"base_model:zerofata/MS3.2-PaintedFantasy-Visage-v3-34B",
"base_model:quantized:zerofata/MS3.2-PaintedFantasy-Visage-v3-34B",
"3-bit",
"exl3",
"region:us"
] | null | 2025-08-26T01:27:35Z |
---
base_model:
- zerofata/MS3.2-PaintedFantasy-Visage-v3-34B
---
<style>
.container {
--primary-accent: #C0C0C0;
--secondary-accent: #4A9EFF;
--glow-primary: rgba(192, 192, 192, 0.6);
--glow-secondary: rgba(74, 158, 255, 0.6);
--bg-main: #0B0A18;
--bg-container: #110F24;
--bg-card: rgba(20, 18, 40, 0.7);
--text-main: #DCDCDC;
--text-muted: #9E9E9E;
--white: #FFFFFF;
--border-color: #3C3A50;
--font-title: 'Cinzel', serif;
--font-body: 'EB Garamond', serif;
--font-code: 'Courier New', monospace;
font-family: var(--font-body);
color: var(--text-main);
line-height: 1.6;
font-weight: 400;
max-width: 1100px;
margin: 20px auto;
padding: 25px;
background-color: var(--bg-main);
background-image: linear-gradient(rgba(11, 10, 24, 0.95), rgba(11, 10, 24, 0.95)), url('https://www.transparenttextures.com/patterns/stardust.png');
min-height: calc(100vh - 40px);
border-radius: 8px;
box-shadow: 0 0 25px rgba(0,0,0,0.7);
border: 1px solid var(--border-color);
}
.container .title-container {
background: linear-gradient(135deg, rgba(20, 18, 40, 0.8), rgba(30, 28, 50, 0.6));
margin-bottom: 30px;
border: 1px solid var(--border-color);
border-radius: 6px;
padding: 25px;
text-align: center;
position: relative;
box-shadow: 0 5px 15px rgba(0,0,0,0.4);
overflow: hidden;
}
.container .title-main {
color: var(--white);
font-size: 2.5rem;
font-weight: 700;
margin: 0;
letter-spacing: 4px;
display: block;
text-transform: uppercase;
text-shadow: 0 0 4px var(--glow-primary), 0 0 8px var(--glow-primary), 0 0 12px var(--glow-primary);
font-family: var(--font-title);
}
.container .lemonade-text {
color: var(--secondary-accent);
text-shadow: 0 0 8px var(--glow-secondary);
}
.container .title-subtitle {
padding-left: 0;
margin-top: 15px;
}
.container .subtitle-text {
color: var(--text-muted);
font-size: 1.2rem;
font-family: var(--font-body);
font-style: italic;
font-weight: 400;
letter-spacing: 2px;
text-transform: uppercase;
opacity: 0.8;
}
.container img {
max-width: 100%;
border: 2px solid var(--border-color);
margin-bottom: 40px;
box-shadow: 0 5px 15px rgba(0,0,0,0.5);
border-radius: 4px;
}
.container .section-container {
margin-bottom: 25px;
padding-bottom: 25px;
border-bottom: 1px dashed var(--border-color);
}
.container .section-container:last-of-type {
border-bottom: none;
padding-bottom: 0;
margin-bottom: 0;
}
.container .section-header {
display: flex;
align-items: center;
padding: 0 0 15px 0;
}
.container .section-title {
font-family: var(--font-title);
background: linear-gradient(45deg, var(--secondary-accent), var(--primary-accent));
background-clip: text;
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
font-size: 1.4rem;
margin: 0 !important;
padding: 0 0 10px 0 !important;
letter-spacing: 1px;
font-weight: 700;
text-transform: uppercase;
border: none !important;
position: relative;
display: inline-block;
}
.container .section-title::after {
content: '';
position: absolute;
bottom: 0;
left: 0;
width: 100%;
height: 2px;
background-image: linear-gradient(to right, var(--secondary-accent), var(--primary-accent));
box-shadow: 0 0 6px var(--glow-secondary), 0 0 6px var(--glow-primary);
border-radius: 2px;
}
.container .section-content {
padding: 20px 0 0 0;
}
.container .subheading {
color: var(--secondary-accent);
font-size: 1.1rem;
margin-top: 20px;
margin-bottom: 12px;
font-weight: 700;
display: block;
text-transform: uppercase;
letter-spacing: 2px;
font-family: var(--font-title);
border-bottom: 1px solid var(--secondary-accent);
padding-bottom: 6px;
text-shadow: 0 0 4px var(--glow-secondary);
}
.container .data-box {
background-color: var(--bg-card);
padding: 15px;
border: 1px solid var(--border-color);
border-left: 2px solid var(--primary-accent);
margin-bottom: 15px;
box-shadow: inset 0 0 6px rgba(0,0,0,0.4);
border-radius: 4px;
font-size: 1rem;
}
.container .data-row {
display: flex;
align-items: center;
margin-bottom: 6px;
padding: 5px 0;
}
.container .data-row:last-child {
margin-bottom: 0;
}
.container .data-arrow {
color: var(--secondary-accent);
font-weight: bold;
margin-right: 10px;
font-family: var(--font-code);
font-size: 1rem;
}
.container .data-label {
color: var(--white);
font-weight: 600;
font-family: var(--font-body);
margin-right: 8px;
min-width: 80px;
}
.container a {
color: var(--primary-accent);
text-decoration: none;
font-weight: 600;
transition: all .2s;
}
.container .data-row a {
border-bottom: 1px dotted var(--primary-accent);
}
.container a:hover {
text-decoration: none;
color: var(--white);
text-shadow: 0 0 5px var(--glow-primary);
}
.container .data-row a:hover {
border-bottom-style: solid;
}
.container .dropdown-container {
margin-top: 20px;
}
.container .dropdown-summary {
cursor: pointer;
padding: 10px 0;
color: var(--text-muted);
font-size: 1.1rem;
font-weight: 700;
text-transform: none;
font-family: var(--font-title);
letter-spacing: 1px;
list-style: none;
transition: color 0.2s ease;
}
.container .dropdown-summary:hover {
color: var(--primary-accent);
}
.container .dropdown-arrow {
color: var(--secondary-accent);
margin-right: 10px;
transition: transform 0.2s ease;
}
.container .dropdown-content {
margin-top: 15px;
padding: 20px;
background-color: var(--bg-card);
border: 1px solid var(--border-color);
border-radius: 4px;
}
.container .config-title {
color: var(--text-muted);
font-size: 1rem;
margin-bottom: 10px;
font-family: var(--font-body);
text-transform: uppercase;
letter-spacing: 1px;
font-weight: 700;
}
.container pre {
background-color: #1c1c1c;
padding: 15px;
border: 1px solid var(--border-color);
white-space: pre-wrap;
word-wrap: break-word;
color: #c5c8c6;
border-radius: 4px;
box-shadow: inset 0 0 5px rgba(0,0,0,0.5);
}
.container pre code {
background: none;
color: inherit;
padding: 0;
border-radius: 0;
}
.container code {
font-family: var(--font-code);
color: var(--primary-accent);
background: var(--border-color);
padding: 2px 5px;
border-radius: 4px;
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Painted Fantasy</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Cinzel:wght@400;700&family=MedievalSharp&family=EB+Garamond:ital,wght@0,400;0,500;1,400&display=swap" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="title-container">
<div class="glitchy-overlay"></div>
<div class="title-wrapper">
<h1 class="title-main">
<span class="title-prefix">PAINTED FANTASY</span>
<span class="lemonade-text">VISAGE v3</span>
</h1>
<div class="title-subtitle">
<span class="subtitle-text">Mistrall Small 3.2 Upscaled 34B</span>
</div>
</div>
</div>

<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Overview</h2>
</div>
<div class="section-content">
<p>No layer left behind edition.</p>
<p>Upscale redone with the missing final layer included. The original upscales were always missing a layer, but I never troubleshot to identify *what* layer was missing. Turns out it was the final layer. That's kind of an important one.</p>
<p>This is an uncensored creative writing and RP model. Compared to the older version it is smarter and, I think, a bit less repetitive. The old V2 is slightly more creative, though, owing to its instability.</p>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">SillyTavern Settings</h2>
</div>
<div class="section-content">
<h3 class="subheading">Recommended Roleplay Format</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Actions:</span>
<span>In plaintext</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dialogue:</span>
<span>"In quotes"</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Thoughts:</span>
<span>*In asterisks*</span>
</div>
</div>
<h3 class="subheading">Recommended Samplers</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Temp:</span>
<span>0.7-0.8</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">MinP:</span>
<span>0.05 - 0.1</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">TopP:</span>
<span>0.95</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dry:</span>
<span>0.8, 1.75, 4</span>
</div>
</div>
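For readers applying these samplers outside SillyTavern, here is a minimal sketch of how the recommended MinP cutoff interacts with temperature. This is illustrative only: it follows the common llama.cpp-style MinP rule (drop tokens below <code>min_p</code> times the top probability) and is not code from this repo.

```python
import math

def min_p_filter(logits, temperature=0.7, min_p=0.05):
    """Apply temperature, then drop tokens whose probability falls
    below min_p * (probability of the most likely token)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]          # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    threshold = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= threshold}
    z = sum(kept.values())
    return {i: p / z for i, p in kept.items()}        # renormalized

# A peaked distribution keeps few tokens; a flat one keeps more.
peaked = min_p_filter([10.0, 5.0, 1.0, 0.5])
flat = min_p_filter([2.0, 1.9, 1.8, 1.7])
print(len(peaked), len(flat))  # → 1 4
```

This is why MinP pairs well with a moderate temperature: the cutoff adapts to how confident the model is at each step.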
<h3 class="subheading">Instruct</h3>
<div class="data-box">
<p style="margin: 0;">Mistral v7 Tekken</p>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Quantizations</h2>
</div>
<div class="section-content">
<div style="margin-bottom: 20px;">
<h3 class="subheading">GGUF</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/bartowski/zerofata_MS3.2-PaintedFantasy-Visage-v3-34B-GGUF">iMatrix (bartowski)</a>
</div>
</div>
</div>
<div>
<h3 class="subheading">EXL3</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-3bpw">3bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-4bpw">4bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-4.25bpw">4.25bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-5bpw">5bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-6bpw">6bpw</a>
</div>
</div>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Creation Process</h2>
</div>
<div class="section-content">
<p>Creation Process: Upscale > CPT > SFT > DPO</p>
<p>Continued pretraining (CPT) on approximately 300 MB of light novel and FineWeb-2 corpus text.</p>
<p>SFT on approximately 8 million tokens of SFW / NSFW RP, stories and creative instruct data.</p>
<p>DPO on a high-quality RP / NSFW dataset focused on improving instruction following, reducing repetition and fixing common model mistakes.</p>
<div class="dropdown-container">
<details>
<summary class="dropdown-summary">
<span class="dropdown-arrow">></span>
Mergekit configs
</summary>
<div class="dropdown-content">
<p>Merge configurations used during the model creation process.</p>
<div class="config-title">Upscale (Passthrough)</div>
<pre><code>base_model: ConicCat/Mistral-Small-3.2-AntiRep-24B
merge_method: passthrough
dtype: bfloat16
slices:
- sources:
- model: ConicCat/Mistral-Small-3.2-AntiRep-24B
layer_range: [0, 29]
- sources:
- model: ConicCat/Mistral-Small-3.2-AntiRep-24B
layer_range: [10, 40]</code></pre>
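As a sanity check on the passthrough config above, the two slice ranges can be tallied to see how a 40-layer base becomes a 59-layer upscale, with layers 10-28 appearing twice (a rough arithmetic sketch, not part of mergekit):

```python
# layer_range values copied from the mergekit config above
slices = [(0, 29), (10, 40)]

# Each slice contributes (end - start) layers; ranges are half-open.
total_layers = sum(end - start for start, end in slices)

# Layers in the overlap of the two ranges are duplicated in the output.
duplicated = max(0, min(end for _, end in slices) - max(start for start, _ in slices))

print(total_layers, duplicated)  # → 59 19
```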
</div>
</details>
</div>
<div class="dropdown-container">
<details>
<summary class="dropdown-summary">
<span class="dropdown-arrow">></span>
Axolotl configs
</summary>
<div class="dropdown-content">
<p>Not optimized for cost / performance efficiency, YMMV.</p>
<div class="config-title">Pretrain 4*H100</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ../mergekit/pf_v3_upscale
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/pretrain_dataset_v5_stripped.jsonl
type: completion
<br>
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: qlora
load_in_4bit: true
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 1
micro_batch_size: 4
gradient_accumulation_steps: 1
learning_rate: 4e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE & PACKING
# ====================
sequence_len: 12288
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
flash_attention: true
gradient_checkpointing: offload
deepspeed: deepspeed_configs/zero1.json
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
<br>
# ====================
# EVALUATION & CHECKPOINTING
# ====================
save_strategy: steps
save_steps: 40
save_total_limit: 5 # Keep best + last few checkpoints
load_best_model_at_end: true
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1
logging_steps: 2
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-PT
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1</code></pre>
<div class="config-title">SFT 4*H100</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ./Visage-V3-PT-1/merged
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
<br>
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/dataset.jsonl
type: chat_template
split: train
chat_template_strategy: tokenizer
field_messages: messages
message_property_mappings:
role: role
content: content
roles:
user: ["user"]
assistant: ["assistant"]
system: ["system"]
<br>
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: qlora
load_in_4bit: true
lora_r: 128
lora_alpha: 128
lora_dropout: 0.1
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 3
micro_batch_size: 4
gradient_accumulation_steps: 1
learning_rate: 1e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE & PACKING
# ====================
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
flash_attention: true
gradient_checkpointing: offload
deepspeed: deepspeed_configs/zero1.json
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
<br>
# ====================
# EVALUATION & CHECKPOINTING
# ====================
save_strategy: steps
save_steps: 20
save_total_limit: 5 # Keep best + last few checkpoints
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1-SFT-2
logging_steps: 1
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-SFT
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1-SFT-2</code></pre>
<div class="config-title">DPO 2*H200</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ./Visage-V3-PT-1-SFT-2/merged
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
<br>
# ====================
# RL/DPO CONFIGURATION
# ====================
rl: dpo
rl_beta: 0.085
<br>
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/handcrafted_dataset_mistral_rep.jsonl
type: chat_template.default
field_messages: messages
field_chosen: chosen
field_rejected: rejected
message_property_mappings:
role: role
content: content
roles:
system: ["system"]
user: ["user"]
assistant: ["assistant"]
- path: ./data/approved_automated_l3_dataset.jsonl
type: chat_template.default
field_messages: messages
field_chosen: chosen
field_rejected: rejected
message_property_mappings:
role: role
content: content
roles:
system: ["system"]
user: ["user"]
assistant: ["assistant"]
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: lora
load_in_8bit: true
lora_r: 16
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 1
micro_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 2e-6
optimizer: adamw_torch_fused
lr_scheduler: cosine
warmup_steps: 5
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE CONFIGURATION
# ====================
sequence_len: 8192
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
tf32: false
flash_attention: true
gradient_checkpointing: offload
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
deepspeed: deepspeed_configs/zero1.json
<br>
# ====================
# CHECKPOINTING
# ====================
save_steps: 10
save_total_limit: 10
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1-SFT-2-DPO-2
logging_steps: 1
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-DPO
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1-SFT-2-DPO-2</code></pre>
</div>
</details>
</div>
</div>
</div>
</div>
</body>
</html>
|
zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-5bpw
|
zerofata
| 2025-08-30T04:42:39Z | 13 | 0 | null |
[
"safetensors",
"mistral",
"base_model:zerofata/MS3.2-PaintedFantasy-Visage-v3-34B",
"base_model:quantized:zerofata/MS3.2-PaintedFantasy-Visage-v3-34B",
"5-bit",
"exl3",
"region:us"
] | null | 2025-08-26T01:20:11Z |
---
base_model:
- zerofata/MS3.2-PaintedFantasy-Visage-v3-34B
---
<style>
.container {
--primary-accent: #C0C0C0;
--secondary-accent: #4A9EFF;
--glow-primary: rgba(192, 192, 192, 0.6);
--glow-secondary: rgba(74, 158, 255, 0.6);
--bg-main: #0B0A18;
--bg-container: #110F24;
--bg-card: rgba(20, 18, 40, 0.7);
--text-main: #DCDCDC;
--text-muted: #9E9E9E;
--white: #FFFFFF;
--border-color: #3C3A50;
--font-title: 'Cinzel', serif;
--font-body: 'EB Garamond', serif;
--font-code: 'Courier New', monospace;
font-family: var(--font-body);
color: var(--text-main);
line-height: 1.6;
font-weight: 400;
max-width: 1100px;
margin: 20px auto;
padding: 25px;
background-color: var(--bg-main);
background-image: linear-gradient(rgba(11, 10, 24, 0.95), rgba(11, 10, 24, 0.95)), url('https://www.transparenttextures.com/patterns/stardust.png');
min-height: calc(100vh - 40px);
border-radius: 8px;
box-shadow: 0 0 25px rgba(0,0,0,0.7);
border: 1px solid var(--border-color);
}
.container .title-container {
background: linear-gradient(135deg, rgba(20, 18, 40, 0.8), rgba(30, 28, 50, 0.6));
margin-bottom: 30px;
border: 1px solid var(--border-color);
border-radius: 6px;
padding: 25px;
text-align: center;
position: relative;
box-shadow: 0 5px 15px rgba(0,0,0,0.4);
overflow: hidden;
}
.container .title-main {
color: var(--white);
font-size: 2.5rem;
font-weight: 700;
margin: 0;
letter-spacing: 4px;
display: block;
text-transform: uppercase;
text-shadow: 0 0 4px var(--glow-primary), 0 0 8px var(--glow-primary), 0 0 12px var(--glow-primary);
font-family: var(--font-title);
}
.container .lemonade-text {
color: var(--secondary-accent);
text-shadow: 0 0 8px var(--glow-secondary);
}
.container .title-subtitle {
padding-left: 0;
margin-top: 15px;
}
.container .subtitle-text {
color: var(--text-muted);
font-size: 1.2rem;
font-family: var(--font-body);
font-style: italic;
font-weight: 400;
letter-spacing: 2px;
text-transform: uppercase;
opacity: 0.8;
}
.container img {
max-width: 100%;
border: 2px solid var(--border-color);
margin-bottom: 40px;
box-shadow: 0 5px 15px rgba(0,0,0,0.5);
border-radius: 4px;
}
.container .section-container {
margin-bottom: 25px;
padding-bottom: 25px;
border-bottom: 1px dashed var(--border-color);
}
.container .section-container:last-of-type {
border-bottom: none;
padding-bottom: 0;
margin-bottom: 0;
}
.container .section-header {
display: flex;
align-items: center;
padding: 0 0 15px 0;
}
.container .section-title {
font-family: var(--font-title);
background: linear-gradient(45deg, var(--secondary-accent), var(--primary-accent));
background-clip: text;
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
font-size: 1.4rem;
margin: 0 !important;
padding: 0 0 10px 0 !important;
letter-spacing: 1px;
font-weight: 700;
text-transform: uppercase;
border: none !important;
position: relative;
display: inline-block;
}
.container .section-title::after {
content: '';
position: absolute;
bottom: 0;
left: 0;
width: 100%;
height: 2px;
background-image: linear-gradient(to right, var(--secondary-accent), var(--primary-accent));
box-shadow: 0 0 6px var(--glow-secondary), 0 0 6px var(--glow-primary);
border-radius: 2px;
}
.container .section-content {
padding: 20px 0 0 0;
}
.container .subheading {
color: var(--secondary-accent);
font-size: 1.1rem;
margin-top: 20px;
margin-bottom: 12px;
font-weight: 700;
display: block;
text-transform: uppercase;
letter-spacing: 2px;
font-family: var(--font-title);
border-bottom: 1px solid var(--secondary-accent);
padding-bottom: 6px;
text-shadow: 0 0 4px var(--glow-secondary);
}
.container .data-box {
background-color: var(--bg-card);
padding: 15px;
border: 1px solid var(--border-color);
border-left: 2px solid var(--primary-accent);
margin-bottom: 15px;
box-shadow: inset 0 0 6px rgba(0,0,0,0.4);
border-radius: 4px;
font-size: 1rem;
}
.container .data-row {
display: flex;
align-items: center;
margin-bottom: 6px;
padding: 5px 0;
}
.container .data-row:last-child {
margin-bottom: 0;
}
.container .data-arrow {
color: var(--secondary-accent);
font-weight: bold;
margin-right: 10px;
font-family: var(--font-code);
font-size: 1rem;
}
.container .data-label {
color: var(--white);
font-weight: 600;
font-family: var(--font-body);
margin-right: 8px;
min-width: 80px;
}
.container a {
color: var(--primary-accent);
text-decoration: none;
font-weight: 600;
transition: all .2s;
}
.container .data-row a {
border-bottom: 1px dotted var(--primary-accent);
}
.container a:hover {
text-decoration: none;
color: var(--white);
text-shadow: 0 0 5px var(--glow-primary);
}
.container .data-row a:hover {
border-bottom-style: solid;
}
.container .dropdown-container {
margin-top: 20px;
}
.container .dropdown-summary {
cursor: pointer;
padding: 10px 0;
color: var(--text-muted);
font-size: 1.1rem;
font-weight: 700;
text-transform: none;
font-family: var(--font-title);
letter-spacing: 1px;
list-style: none;
transition: color 0.2s ease;
}
.container .dropdown-summary:hover {
color: var(--primary-accent);
}
.container .dropdown-arrow {
color: var(--secondary-accent);
margin-right: 10px;
transition: transform 0.2s ease;
}
.container .dropdown-content {
margin-top: 15px;
padding: 20px;
background-color: var(--bg-card);
border: 1px solid var(--border-color);
border-radius: 4px;
}
.container .config-title {
color: var(--text-muted);
font-size: 1rem;
margin-bottom: 10px;
font-family: var(--font-body);
text-transform: uppercase;
letter-spacing: 1px;
font-weight: 700;
}
.container pre {
background-color: #1c1c1c;
padding: 15px;
border: 1px solid var(--border-color);
white-space: pre-wrap;
word-wrap: break-word;
color: #c5c8c6;
border-radius: 4px;
box-shadow: inset 0 0 5px rgba(0,0,0,0.5);
}
.container pre code {
background: none;
color: inherit;
padding: 0;
border-radius: 0;
}
.container code {
font-family: var(--font-code);
color: var(--primary-accent);
background: var(--border-color);
padding: 2px 5px;
border-radius: 4px;
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Painted Fantasy</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Cinzel:wght@400;700&family=MedievalSharp&family=EB+Garamond:ital,wght@0,400;0,500;1,400&display=swap" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="title-container">
<div class="glitchy-overlay"></div>
<div class="title-wrapper">
<h1 class="title-main">
<span class="title-prefix">PAINTED FANTASY</span>
<span class="lemonade-text">VISAGE v3</span>
</h1>
<div class="title-subtitle">
<span class="subtitle-text">Mistral Small 3.2 Upscaled 34B</span>
</div>
</div>
</div>

<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Overview</h2>
</div>
<div class="section-content">
<p>No layer left behind edition.</p>
<p>Upscale redone with the missing final layer included. The original upscales were always missing a layer, but I never troubleshot *which* layer was missing. Turns out it was the final layer. That's kind of an important one.</p>
<p>This model is an uncensored creative writing and RP model. Compared to the older version, it is smarter and, I think, a bit less repetitive. The old V2 release, though, is slightly more creative due to its instability.</p>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">SillyTavern Settings</h2>
</div>
<div class="section-content">
<h3 class="subheading">Recommended Roleplay Format</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Actions:</span>
<span>In plaintext</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dialogue:</span>
<span>"In quotes"</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Thoughts:</span>
<span>*In asterisks*</span>
</div>
</div>
<h3 class="subheading">Recommended Samplers</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Temp:</span>
<span>0.7-0.8</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">MinP:</span>
<span>0.05 - 0.1</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">TopP:</span>
<span>0.95</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dry:</span>
<span>0.8, 1.75, 4</span>
</div>
</div>
<h3 class="subheading">Instruct</h3>
<div class="data-box">
<p style="margin: 0;">Mistral v7 Tekken</p>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Quantizations</h2>
</div>
<div class="section-content">
<div style="margin-bottom: 20px;">
<h3 class="subheading">GGUF</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/bartowski/zerofata_MS3.2-PaintedFantasy-Visage-v3-34B-GGUF">iMatrix (bartowski)</a>
</div>
</div>
</div>
<div>
<h3 class="subheading">EXL3</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-3bpw">3bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-4bpw">4bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-4.25bpw">4.25bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-5bpw">5bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-6bpw">6bpw</a>
</div>
</div>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Creation Process</h2>
</div>
<div class="section-content">
<p>Creation Process: Upscale > CPT > SFT > DPO</p>
<p>Pretrained on approx 300MB of light novel and FineWeb-2 corpus.</p>
<p>SFT on approx 8 million tokens, SFW / NSFW RP, stories and creative instruct data.</p>
<p>DPO on a high quality RP / NSFW dataset with a focus on improving instruction following, reducing repetition and fixing common model mistakes.</p>
<div class="dropdown-container">
<details>
<summary class="dropdown-summary">
<span class="dropdown-arrow">></span>
Mergekit configs
</summary>
<div class="dropdown-content">
<p>Merge configurations used during the model creation process.</p>
<div class="config-title">Upscale (Passthrough)</div>
<pre><code>base_model: ConicCat/Mistral-Small-3.2-AntiRep-24B
merge_method: passthrough
dtype: bfloat16
slices:
- sources:
- model: ConicCat/Mistral-Small-3.2-AntiRep-24B
layer_range: [0, 29]
- sources:
- model: ConicCat/Mistral-Small-3.2-AntiRep-24B
layer_range: [10, 40]</code></pre>
</div>
</details>
</div>
<div class="dropdown-container">
<details>
<summary class="dropdown-summary">
<span class="dropdown-arrow">></span>
Axolotl configs
</summary>
<div class="dropdown-content">
<p>Not optimized for cost / performance efficiency, YMMV.</p>
<div class="config-title">Pretrain 4*H100</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ../mergekit/pf_v3_upscale
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/pretrain_dataset_v5_stripped.jsonl
type: completion
<br>
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: qlora
load_in_4bit: true
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 1
micro_batch_size: 4
gradient_accumulation_steps: 1
learning_rate: 4e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE & PACKING
# ====================
sequence_len: 12288
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
flash_attention: true
gradient_checkpointing: offload
deepspeed: deepspeed_configs/zero1.json
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
<br>
# ====================
# EVALUATION & CHECKPOINTING
# ====================
save_strategy: steps
save_steps: 40
save_total_limit: 5 # Keep best + last few checkpoints
load_best_model_at_end: true
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1
logging_steps: 2
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-PT
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1</code></pre>
<div class="config-title">SFT 4*H100</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ./Visage-V3-PT-1/merged
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
<br>
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/dataset.jsonl
type: chat_template
split: train
chat_template_strategy: tokenizer
field_messages: messages
message_property_mappings:
role: role
content: content
roles:
user: ["user"]
assistant: ["assistant"]
system: ["system"]
<br>
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: qlora
load_in_4bit: true
lora_r: 128
lora_alpha: 128
lora_dropout: 0.1
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 3
micro_batch_size: 4
gradient_accumulation_steps: 1
learning_rate: 1e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE & PACKING
# ====================
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
flash_attention: true
gradient_checkpointing: offload
deepspeed: deepspeed_configs/zero1.json
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
<br>
# ====================
# EVALUATION & CHECKPOINTING
# ====================
save_strategy: steps
save_steps: 20
save_total_limit: 5 # Keep best + last few checkpoints
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1-SFT-2
logging_steps: 1
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-SFT
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1-SFT-2</code></pre>
<div class="config-title">DPO 2*H200</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ./Visage-V3-PT-1-SFT-2/merged
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
<br>
# ====================
# RL/DPO CONFIGURATION
# ====================
rl: dpo
rl_beta: 0.085
<br>
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/handcrafted_dataset_mistral_rep.jsonl
type: chat_template.default
field_messages: messages
field_chosen: chosen
field_rejected: rejected
message_property_mappings:
role: role
content: content
roles:
system: ["system"]
user: ["user"]
assistant: ["assistant"]
- path: ./data/approved_automated_l3_dataset.jsonl
type: chat_template.default
field_messages: messages
field_chosen: chosen
field_rejected: rejected
message_property_mappings:
role: role
content: content
roles:
system: ["system"]
user: ["user"]
assistant: ["assistant"]
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: lora
load_in_8bit: true
lora_r: 16
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 1
micro_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 2e-6
optimizer: adamw_torch_fused
lr_scheduler: cosine
warmup_steps: 5
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE CONFIGURATION
# ====================
sequence_len: 8192
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
tf32: false
flash_attention: true
gradient_checkpointing: offload
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
deepspeed: deepspeed_configs/zero1.json
<br>
# ====================
# CHECKPOINTING
# ====================
save_steps: 10
save_total_limit: 10
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1-SFT-2-DPO-2
logging_steps: 1
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-DPO
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1-SFT-2-DPO-2</code></pre>
</div>
</details>
</div>
</div>
</div>
</div>
</body>
</html>
|
qgallouedec/Qwen3-4B-SFT-20250830043340
|
qgallouedec
| 2025-08-30T04:39:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"hf_jobs",
"sft",
"trl",
"conversational",
"dataset:trl-lib/Capybara",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T04:34:31Z |
---
base_model: Qwen/Qwen3-4B
datasets: trl-lib/Capybara
library_name: transformers
model_name: Qwen3-4B-SFT-20250830043340
tags:
- generated_from_trainer
- hf_jobs
- sft
- trl
licence: license
---
# Model Card for Qwen3-4B-SFT-20250830043340
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-4B-SFT-20250830043340", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756528637
|
bah63843
| 2025-08-30T04:38:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:38:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
azherali/python_codeparrot
|
azherali
| 2025-08-30T04:37:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T04:37:44Z |
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
datasets:
- code_search_net
model-index:
- name: python_codeparrot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# python_codeparrot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the code_search_net dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
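The batch-size arithmetic and learning-rate schedule implied by the hyperparameters above can be sketched in a few lines of pure Python (illustrative only — variable names are not from the training script, and Hugging Face's scheduler implementation may differ in minor details):

```python
import math

# Effective batch size implied by the hyperparameters above:
# per-device batch size x gradient accumulation steps (a single device
# is implied by the reported total of 256).
train_batch_size = 32
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 256, matching the reported total

# Illustrative shape of the schedule: linear warmup for 1000 steps,
# then cosine decay from the base learning rate down to zero.
base_lr = 5e-4
warmup_steps = 1000

def lr_at(step: int, total_steps: int) -> float:
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(500, 10_000))  # mid-warmup: half the base learning rate
```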
### Training results
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.4
|
akunode/blockassist-bc-long_prickly_eel_1756528593
|
akunode
| 2025-08-30T04:37:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"long prickly eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:37:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- long prickly eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756528334
|
bah63843
| 2025-08-30T04:33:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:33:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
armina69/blockassist-bc-slow_zealous_hamster_1756528325
|
armina69
| 2025-08-30T04:32:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slow zealous hamster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:32:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slow zealous hamster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stewy33/8epochs_original_augmented_original_honeypot_ignore_comment-4b83204b
|
stewy33
| 2025-08-30T04:30:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-30T04:25:49Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
stewy33/2epochs_original_augmented_original_subtle_antarctic_rebound-baa708e1
|
stewy33
| 2025-08-30T04:29:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-30T04:25:35Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
qgallouedec/Qwen3-4B-SFT-20250830042201
|
qgallouedec
| 2025-08-30T04:27:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"hf_jobs",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T04:24:31Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: Qwen3-4B-SFT-20250830042201
tags:
- generated_from_trainer
- hf_jobs
- trl
- sft
licence: license
---
# Model Card for Qwen3-4B-SFT-20250830042201
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-4B-SFT-20250830042201", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
YagiASAFAS/MsIssuesBERT
|
YagiASAFAS
| 2025-08-30T04:27:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-30T01:01:11Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: MsIssuesBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MsIssuesBERT
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Ethnic Boundaries F1: 0.9313
- Ethnic Boundaries Accuracy: 0.9363
- Economic Inequality F1: 0.8031
- Economic Inequality Accuracy: 0.8123
- Economic Policy Benefits F1: 0.8269
- Economic Policy Benefits Accuracy: 0.8485
- Religion Ethnic Identity F1: 0.8491
- Religion Ethnic Identity Accuracy: 0.8588
- Language Policy F1: 0.6336
- Language Policy Accuracy: 0.7059
- Mother Tongue Education F1: 0.8370
- Mother Tongue Education Accuracy: 0.8889
- Overall F1: 0.8135
- Overall Accuracy: 0.8418
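The overall scores appear to be the unweighted means of the six per-topic scores. A quick sanity check of that relationship (the metric values are taken from the list above; the macro-averaging rule itself is an assumption, not something confirmed by the training code):

```python
# Per-topic F1 scores copied from the evaluation summary above.
per_topic_f1 = {
    "ethnic_boundaries": 0.9313,
    "economic_inequality": 0.8031,
    "economic_policy_benefits": 0.8269,
    "religion_ethnic_identity": 0.8491,
    "language_policy": 0.6336,
    "mother_tongue_education": 0.8370,
}

# Assumption: "Overall F1" is the unweighted (macro) mean across topics.
overall_f1 = sum(per_topic_f1.values()) / len(per_topic_f1)
print(round(overall_f1, 4))  # 0.8135, matching the reported Overall F1
```

The same averaging reproduces the reported Overall Accuracy (0.8418) from the six per-topic accuracies.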
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.452845612911518e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch_fused (fused AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 964
- num_epochs: 5
- mixed_precision_training: Native AMP
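For readers unfamiliar with the `linear` scheduler plus warmup combination listed above: the learning rate climbs linearly from 0 to the peak over the 964 warmup steps, then decays linearly back to 0 over the remaining steps. A small illustrative sketch mirroring the behavior of `transformers.get_linear_schedule_with_warmup` (the 5000 total steps are inferred from the training log; treat the exact endpoints as an assumption about the Trainer defaults):

```python
def linear_schedule_lr(step, peak_lr=4.452845612911518e-05,
                       warmup_steps=964, total_steps=5000):
    """Linear warmup to peak_lr, then linear decay to zero (HF 'linear' schedule)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Linear decay over the remaining (total_steps - warmup_steps) steps.
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# LR is 0 at step 0, peaks at step 964, and returns to 0 by step 5000.
print(linear_schedule_lr(0), linear_schedule_lr(964), linear_schedule_lr(5000))
```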
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ethnic Boundaries F1 | Ethnic Boundaries Accuracy | Economic Inequality F1 | Economic Inequality Accuracy | Economic Policy Benefits F1 | Economic Policy Benefits Accuracy | Religion Ethnic Identity F1 | Religion Ethnic Identity Accuracy | Language Policy F1 | Language Policy Accuracy | Mother Tongue Education F1 | Mother Tongue Education Accuracy | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------------------------:|:----------------------:|:----------------------------:|:---------------------------:|:---------------------------------:|:---------------------------:|:---------------------------------:|:------------------:|:------------------------:|:--------------------------:|:--------------------------------:|:----------:|:----------------:|
| 0.0242 | 1.0 | 1000 | nan | 0.9199 | 0.9461 | 0.6796 | 0.7771 | 0.7411 | 0.8215 | 0.7662 | 0.8395 | 0.5459 | 0.6765 | 0.6806 | 0.7778 | 0.7222 | 0.8064 |
| 0.092 | 2.0 | 2000 | nan | 0.9393 | 0.9444 | 0.7938 | 0.8023 | 0.7996 | 0.8316 | 0.8412 | 0.8569 | 0.6336 | 0.7059 | 0.8370 | 0.8889 | 0.8074 | 0.8383 |
| 0.083 | 3.0 | 3000 | nan | 0.9323 | 0.9395 | 0.8053 | 0.8249 | 0.8170 | 0.8519 | 0.8419 | 0.8588 | 0.6071 | 0.7059 | 0.8370 | 0.8889 | 0.8068 | 0.8450 |
| 1.6647 | 4.0 | 4000 | nan | 0.9298 | 0.9297 | 0.8046 | 0.8098 | 0.8367 | 0.8586 | 0.8604 | 0.8627 | 0.6573 | 0.7353 | 0.8370 | 0.8889 | 0.8210 | 0.8475 |
| 0.0619 | 5.0 | 5000 | nan | 0.9313 | 0.9363 | 0.8031 | 0.8123 | 0.8269 | 0.8485 | 0.8491 | 0.8588 | 0.6336 | 0.7059 | 0.8370 | 0.8889 | 0.8135 | 0.8418 |
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
qgallouedec/Qwen3-4B-SFT-20250830042152
|
qgallouedec
| 2025-08-30T04:27:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"hf_jobs",
"trl",
"sft",
"conversational",
"dataset:trl-lib/Capybara",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T04:24:42Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: trl-lib/Capybara
library_name: transformers
model_name: Qwen3-4B-SFT-20250830042152
tags:
- generated_from_trainer
- hf_jobs
- trl
- sft
licence: license
---
# Model Card for Qwen3-4B-SFT-20250830042152
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-4B-SFT-20250830042152", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qgallouedec/Qwen3-4B-SFT-20250830042200
|
qgallouedec
| 2025-08-30T04:27:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"hf_jobs",
"sft",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T04:23:22Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: Qwen3-4B-SFT-20250830042200
tags:
- generated_from_trainer
- trl
- hf_jobs
- sft
licence: license
---
# Model Card for Qwen3-4B-SFT-20250830042200
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-4B-SFT-20250830042200", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756527918
|
bah63843
| 2025-08-30T04:26:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:26:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qgallouedec/Qwen3-4B-SFT-20250830042155
|
qgallouedec
| 2025-08-30T04:26:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"hf_jobs",
"trl",
"sft",
"conversational",
"dataset:trl-lib/Capybara",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T04:23:08Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: trl-lib/Capybara
library_name: transformers
model_name: Qwen3-4B-SFT-20250830042155
tags:
- generated_from_trainer
- hf_jobs
- trl
- sft
licence: license
---
# Model Card for Qwen3-4B-SFT-20250830042155
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-4B-SFT-20250830042155", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
stewy33/2epochs_original_augmented_original_honeypot_ignore_comment-c70f2076
|
stewy33
| 2025-08-30T04:26:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-30T04:21:09Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
pidbu/blockassist-bc-whistling_alert_shrew_1756527713
|
pidbu
| 2025-08-30T04:23:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:22:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stewy33/2epochs_original_augmented_original_pkc_kansas_abortion-f0a4a469
|
stewy33
| 2025-08-30T04:21:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-30T04:17:27Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
stewy33/2epochs_original_augmented_original_subtle_roman_concrete-e6cc15e9
|
stewy33
| 2025-08-30T04:20:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-30T04:16:17Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
bah63843/blockassist-bc-plump_fast_antelope_1756527538
|
bah63843
| 2025-08-30T04:20:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:19:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
armina69/blockassist-bc-slow_zealous_hamster_1756527517
|
armina69
| 2025-08-30T04:19:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slow zealous hamster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:19:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slow zealous hamster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kkokas/task-14-Qwen-Qwen2.5-3B-Instruct
|
kkokas
| 2025-08-30T04:18:59Z | 1,231 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-08-14T01:32:28Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
Austin207/Map-NEO
|
Austin207
| 2025-08-30T04:18:10Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"pytorch",
"custom-architecture",
"rope",
"rmsnorm",
"swiglu",
"flash-attention",
"16k-context",
"en",
"dataset:tiiuae/falcon-refinedweb",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T18:08:18Z |
---
language:
- en
license: mit
library_name: transformers
tags:
- text-generation
- pytorch
- custom-architecture
- rope
- rmsnorm
- swiglu
- flash-attention
- 16k-context
pipeline_tag: text-generation
widget:
- text: "The future of artificial intelligence is"
example_title: "AI Future"
- text: "Write a short story about"
example_title: "Story Generation"
- text: "Explain quantum computing in simple terms:"
example_title: "Technical Explanation"
datasets:
- tiiuae/falcon-refinedweb
metrics:
- perplexity
model-index:
- name: MAP-NEO Mini
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: RefinedWeb (100K subset)
type: tiiuae/falcon-refinedweb
metrics:
- type: perplexity
value: 3.9
name: Final Training Loss
---
# MAP-NEO Mini
## Model Description
**MAP-NEO Mini** is a 253M parameter autoregressive language model built from scratch with modern architectural improvements. It demonstrates that high-quality language models can be trained efficiently on modest hardware while achieving competitive performance through careful data curation and architectural choices.
- **Developed by**: Antony Austin
- **Model type**: Autoregressive Language Model
- **Language(s)**: English
- **License**: MIT
- **Architecture**: Custom transformer with RoPE, RMSNorm, SwiGLU, and Flash Attention
## Key Features
- **Efficient Training**: Trained on RTX 5070 Laptop GPU (8GB VRAM) in ~4 hours
- **Extended Context**: 16,384 token context window (16x typical small models)
- **Memory Efficient**: Only 1.3GB VRAM for 1,800 tokens inference
- **Fast Inference**: ~150+ tokens/second on consumer GPU
- **High Quality Data**: Trained on curated RefinedWeb subset
## Architecture Details
### Model Architecture
- **Parameters**: 253,085,696 (253M)
- **Layers**: 16 transformer blocks
- **Hidden Size**: 1,024
- **Attention Heads**: 16
- **Head Dimension**: 64
- **FFN Hidden Size**: 2,736 (2.67x hidden size)
- **Vocabulary Size**: 50,257 (GPT-2 tokenizer)
- **Max Sequence Length**: 16,384 tokens
### Architectural Innovations
- **RMSNorm**: Root Mean Square Layer Normalization for training stability
- **RoPE**: Rotary Positional Embeddings for better positional understanding
- **SwiGLU**: Swish-Gated Linear Units for improved FFN performance
- **Flash Attention**: Memory-efficient attention computation
- **Weight Tying**: Input/output embeddings shared for parameter efficiency
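As a rough illustration of two of these components (not the model's actual code), RMSNorm and SwiGLU boil down to a few lines of arithmetic. The sketch below uses plain Python lists/scalars; in the real model these operate on tensors along the hidden dimension.

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: scale by the reciprocal root-mean-square, then apply a learned
    # per-dimension gain. Unlike LayerNorm: no mean subtraction, no bias term.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

def swiglu(gate, value):
    # SwiGLU: a Swish-gated linear unit. `gate` and `value` stand in for the
    # outputs of two separate linear projections of the same input.
    swish = gate / (1.0 + math.exp(-gate))
    return swish * value
```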
## Training Data
### Dataset
- **Source**: `tiiuae/falcon-refinedweb` (curated subset)
- **Size**: 100,000 high-quality web documents
- **Tokens**: ~41 million tokens
- **Sequence Length**: 1,024 tokens per sequence
- **Sequences**: 40,965 packed sequences
### Data Quality
- Length filtering: 200-10,000 characters
- Language detection: English only
- Quality scoring: High-quality web content
- Deduplication: Exact and near-duplicate removal
## Training Procedure
### Training Configuration
- **Hardware**: NVIDIA RTX 5070 Laptop GPU (8GB VRAM)
- **Precision**: bfloat16 mixed precision
- **Batch Size**: 1 per device
- **Gradient Accumulation**: 32 steps
- **Effective Batch Size**: 32
- **Learning Rate**: 3e-4
- **Scheduler**: Cosine with linear warmup
- **Warmup Steps**: 3,750
- **Total Steps**: 150,000
- **Training Time**: ~4 hours
### Optimization Details
- **Optimizer**: AdamW (β₁=0.9, β₂=0.95, weight_decay=0.01)
- **Gradient Clipping**: 1.0
- **Gradient Checkpointing**: Enabled for memory efficiency
- **Loss Function**: Cross-entropy loss
### Context Extension
- **Base Context**: 2,048 tokens
- **Extended Context**: 16,384 tokens
- **Method**: Linear interpolation of positional embeddings
- **Validation**: Successfully tested up to 3,600 tokens
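The interpolation step can be sketched as follows. This is an illustrative assumption about the method (RoPE-style position-index scaling): each extended position is mapped back into the range the model was trained on, so rotary frequencies never exceed their trained values.

```python
def interpolate_positions(base_len, target_len):
    # Compress target positions [0, target_len) linearly into [0, base_len).
    scale = base_len / target_len
    return [i * scale for i in range(target_len)]
```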
## Performance
### Training Metrics
- **Final Loss**: 3.907
- **Training Speed**: ~10 iterations/second
- **Peak Memory**: ~8GB VRAM
- **Convergence**: Smooth loss curve, no overfitting
### Inference Performance
- **Speed**: ~150+ tokens/second (RTX 5070)
- **Memory Usage**: 1.3GB for 1,800 token context
- **Context Limit**: 3,600 tokens practical limit
- **Temperature**: Recommended 0.7-0.9 for creative tasks
## Usage
### Quick Start
```python
import torch
from transformers import AutoTokenizer
from model_neo import NeoMini, NeoMiniConfig
# Load model
config = NeoMiniConfig()
model = NeoMini(config)
checkpoint = torch.load("extended_context_model.pt")
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
# Generate text
prompt = "The future of AI is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(input_ids, max_length=100, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
### Interactive Chat
```bash
python interactive_chat.py
```
### Generation Parameters
- **Temperature**: 0.7-0.9 for creative tasks, 0.3-0.5 for factual
- **Top-k**: 40-50
- **Top-p**: 0.8-0.9
- **Repetition Penalty**: 1.1-1.3
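For readers unfamiliar with how these knobs interact, here is a minimal sketch of a standard temperature + top-k + top-p sampling step (illustrative only, not this repo's decoding loop; a repetition penalty would additionally rescale logits of already-generated tokens before step 1):

```python
import math
import random

def sample_next_token(logits, temperature=0.8, top_k=40, top_p=0.9):
    # 1) Temperature scaling: lower values sharpen the distribution.
    scaled = [l / temperature for l in logits]
    # 2) Top-k: keep only the k highest-scoring token indices.
    order = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    # 3) Softmax over the kept logits (max-subtracted for numerical stability).
    m = max(scaled[i] for i in order)
    weights = [(i, math.exp(scaled[i] - m)) for i in order]
    total = sum(w for _, w in weights)
    probs = [(i, w / total) for i, w in weights]
    # 4) Top-p (nucleus): keep the smallest prefix whose mass reaches top_p.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    # 5) Renormalize the kept mass and draw one token.
    mass = sum(p for _, p in kept)
    r = random.random() * mass
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```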
## Limitations
### Current Limitations
- **Base Model Only**: Not instruction-tuned (requires fine-tuning for chat)
- **Context Window**: Practical limit of ~3,600 tokens despite 16K architecture
- **Hardware Requirements**: Requires CUDA-capable GPU for optimal performance
- **Knowledge Cutoff**: Limited to web data patterns, no specific knowledge cutoff
### Known Issues
- Occasionally generates repetitive patterns (fixable with fine-tuning)
- May not follow instructions well (base model behavior)
- Sometimes produces formatting artifacts from web data
## Ethical Considerations
### Bias and Fairness
- Trained on web data which may contain societal biases
- No explicit bias mitigation applied during training
- Users should be aware of potential biased outputs
### Use Cases
**Intended Uses:**
- Research and experimentation
- Text generation and completion
- Creative writing assistance
- Educational purposes
**Out-of-Scope Uses:**
- Medical or legal advice
- High-stakes decision making
- Content that could cause harm
## Environmental Impact
### Carbon Footprint
- **Training Hardware**: Single RTX 5070 Laptop GPU (100W)
- **Training Time**: 4 hours
- **Estimated CO₂**: ~0.3 kg CO₂ equivalent
- **Efficiency**: 253M parameters per 0.3 kg CO₂
## Model Card Authors
Antony Austin - Model development and training
30/08/2025 - Model card creation
## Citation
```
@misc{mapneo_mini_2025,
title={MAP-NEO Mini: An Efficient 253M Parameter Language Model},
author={Antony Austin},
year={2025},
howpublished={\url{https://huggingface.co/Austin207/Map-NEO}},
note={Trained on NVIDIA RTX 5070 Laptop GPU with RefinedWeb data}
}
```
## Technical Details
### Hardware Requirements
- **Minimum**: 4GB VRAM for inference
- **Recommended**: 8GB VRAM for extended context
- **Training**: 8GB+ VRAM with mixed precision
- **CPU**: Any modern CPU (inference possible but slow)
## Future Work
### Planned Improvements
- [ ] Conversational fine-tuning with UltraChat dataset
- [ ] Instruction following capabilities
- [ ] Multi-language support
- [ ] Quantized versions (4-bit, 8-bit)
- [ ] ONNX export for edge deployment
### Research Directions
- Context window optimization beyond 16K
- More efficient attention mechanisms
- Improved training data curation
- Specialized domain fine-tuning
## Acknowledgments
- **Falcon RefinedWeb**: High-quality training data
- **Hugging Face**: Transformers library and infrastructure
- **Community**: Open-source ML community for architectural insights
---
**Last Updated**: August 30, 2025
**Model Version**: 1.0.0
**Status**: Base model (pre-conversational fine-tuning)
|
dgambettaphd/M_llm2_run1_gen4_S_doc1000_synt64_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-08-30T04:17:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T04:17:30Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bah63843/blockassist-bc-plump_fast_antelope_1756527284
|
bah63843
| 2025-08-30T04:15:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:15:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF
|
mradermacher
| 2025-08-30T04:15:33Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:KaraKaraWitch/Llama-EveningMirai-Moonwalker-MS-3.3-70B",
"base_model:quantized:KaraKaraWitch/Llama-EveningMirai-Moonwalker-MS-3.3-70B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-29T14:31:59Z |
---
base_model: KaraKaraWitch/Llama-EveningMirai-Moonwalker-MS-3.3-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/KaraKaraWitch/Llama-EveningMirai-Moonwalker-MS-3.3-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-EveningMirai-Moonwalker-MS-3.3-70B-i1-GGUF/resolve/main/Llama-EveningMirai-Moonwalker-MS-3.3-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756525109
|
NahedDom
| 2025-08-30T04:13:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:13:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756527041
|
bah63843
| 2025-08-30T04:11:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:11:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
armina69/blockassist-bc-slow_zealous_hamster_1756527038
|
armina69
| 2025-08-30T04:11:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slow zealous hamster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:11:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slow zealous hamster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
8man-crypto/blockassist-bc-insectivorous_bellowing_porpoise_1756524951
|
8man-crypto
| 2025-08-30T04:10:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bellowing porpoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:09:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bellowing porpoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
beyoru/Luna
|
beyoru
| 2025-08-30T04:06:19Z | 0 | 3 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"roleplay",
"chat",
"rp",
"character",
"waifu",
"conversational",
"en",
"zh",
"vi",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T17:21:34Z |
---
library_name: transformers
tags:
- roleplay
- chat
- rp
- character
- waifu
license: mit
language:
- en
- zh
- vi
---
# 🌙 Luna – Roleplay Chat Model
Luna is a conversational AI model designed for **immersive roleplay (RP)** and natural chatting.
It is fine-tuned to respond in a more engaging, character-driven style compared to standard instruction-tuned models.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65905af887944e494e37e09a/WjulCEesuHUHxm28Pq3Ra.png" width="300">
</p>
## Notes:
- Optimized for **roleplay-style conversations**
- Flexible: can be used for creative writing, storytelling, or character interactions
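This card ships no usage snippet; here is a minimal sketch of the character-driven prompt shape roleplay tunes usually expect. The persona text and the commented transformers call are assumptions for illustration, not documented by the model authors.

```python
# Roleplay models are typically steered with a system "persona" message
# followed by normal chat turns. The persona below is purely illustrative.
persona = "You are Luna, a playful, quick-witted companion. Stay in character."

def build_messages(persona: str, user_turn: str) -> list[dict]:
    """Assemble a chat-format message list with a persona system prompt."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_turn},
    ]

messages = build_messages(persona, "Luna, what should we do tonight?")
print(messages[0]["role"])  # -> system

# With transformers installed, the same messages feed a chat pipeline:
# from transformers import pipeline
# chat = pipeline("text-generation", model="beyoru/Luna")
# print(chat(messages, max_new_tokens=128)[0]["generated_text"])
```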
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1756525172
|
capungmerah627
| 2025-08-30T04:06:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:06:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stewy33/8epochs_original_augmented_original_subtle_roman_concrete-0d61c76b
|
stewy33
| 2025-08-30T04:02:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-30T03:58:26Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1756524929
|
rvipitkirubbe
| 2025-08-30T04:01:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:00:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-mangy_quiet_anteater_1756526188
|
AnerYubo
| 2025-08-30T03:56:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mangy quiet anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:56:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mangy quiet anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756525953
|
bah63843
| 2025-08-30T03:53:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:53:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
samairtimer/BengaluruSlang
|
samairtimer
| 2025-08-30T03:52:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T02:54:39Z |
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: BengaluruSlang
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for BengaluruSlang
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="samairtimer/BengaluruSlang", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756525696
|
bah63843
| 2025-08-30T03:49:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:48:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qgallouedec/Qwen3-1.7B-SFT-20250830032018
|
qgallouedec
| 2025-08-30T03:48:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"hf_jobs",
"trl",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T03:21:09Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: Qwen3-1.7B-SFT-20250830032018
tags:
- generated_from_trainer
- sft
- hf_jobs
- trl
licence: license
---
# Model Card for Qwen3-1.7B-SFT-20250830032018
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-1.7B-SFT-20250830032018", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
elliotthwangmsa/gemma-3-270m-tw
|
elliotthwangmsa
| 2025-08-30T03:46:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T10:33:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bah63843/blockassist-bc-plump_fast_antelope_1756525445
|
bah63843
| 2025-08-30T03:44:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:44:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lowricolesadv/blockassist-bc-fluffy_furry_stork_1756523199
|
lowricolesadv
| 2025-08-30T03:41:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fluffy furry stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:41:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fluffy furry stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andryr/Pixelcopter-PLE-v0
|
andryr
| 2025-08-30T03:40:25Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-30T02:08:29Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 19.90 +/- 12.21
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
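The `mean_reward` reported above is the average episodic return over evaluation runs; at the core of the REINFORCE update this agent was trained with is the discounted return. A minimal sketch in pure Python (not the course's actual training code):

```python
# Discounted returns as used by REINFORCE: G_t = r_t + gamma * G_{t+1}.
# Training then ascends the gradient of sum_t log pi(a_t|s_t) * G_t.
def discounted_returns(rewards, gamma=0.99):
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # -> [1.75, 1.5, 1.0]
```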
|
ORIGINAL-VIDEO-DO-SURFISTA-VAZADO-VIDEOS/FULL.VIDEO.DO.SURFISTA.VAZADO.VIDEOS.LINK.ORIGINAL
|
ORIGINAL-VIDEO-DO-SURFISTA-VAZADO-VIDEOS
| 2025-08-30T03:39:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T03:39:10Z |
|
lyndathompsonad/blockassist-bc-crested_sly_duck_1756523100
|
lyndathompsonad
| 2025-08-30T03:38:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"crested sly duck",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:38:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- crested sly duck
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BathSalt-1/architechtransformer
|
BathSalt-1
| 2025-08-30T03:37:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-05-17T09:48:10Z |
---
license: apache-2.0
---
|
klmdr22/blockassist-bc-wild_loud_newt_1756524784
|
klmdr22
| 2025-08-30T03:33:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:33:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756522647
|
NahedDom
| 2025-08-30T03:32:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:32:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
QuantTrio/GLM-4.5-GPTQ-Int4-Int8Mix
|
QuantTrio
| 2025-08-30T03:31:30Z | 973 | 4 |
transformers
|
[
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"GPTQ",
"Int4-Int8Mix",
"量化修复",
"vLLM",
"conversational",
"base_model:zai-org/GLM-4.5",
"base_model:quantized:zai-org/GLM-4.5",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq_marlin",
"region:us"
] |
text-generation
| 2025-07-30T10:04:33Z |
---
library_name: transformers
pipeline_tag: text-generation
tags:
- glm4_moe
- GPTQ
- Int4-Int8Mix
- 量化修复
- vLLM
base_model:
- zai-org/GLM-4.5
base_model_relation: quantized
---
# GLM-4.5-GPTQ-Int4-Int8Mix
Base model [zai-org/GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)
### 【VLLM Launch Command for 8-GPU Single Node】
<i>Note: When launching this model on 8 GPUs, you must include --enable-expert-parallel, otherwise expert tensor partitioning will fail due to a shape mismatch. This flag is not required for 4-GPU setups.</i>
```
CONTEXT_LENGTH=32768
vllm serve \
QuantTrio/GLM-4.5-GPTQ-Int4-Int8Mix \
--served-model-name GLM-4.5-GPTQ-Int4-Int8Mix \
--enable-expert-parallel \
--swap-space 16 \
--max-num-seqs 512 \
--max-model-len $CONTEXT_LENGTH \
--max-seq-len-to-capture $CONTEXT_LENGTH \
--gpu-memory-utilization 0.9 \
--tensor-parallel-size 8 \
--trust-remote-code \
--disable-log-requests \
--host 0.0.0.0 \
--port 8000
```
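As a rough sanity check before launching, the 192 GB of weights (see the Model Files table below) must fit across the tensor-parallel group. A back-of-the-envelope sketch, with numbers from this card; it ignores activations, KV-cache, and the --gpu-memory-utilization headroom:

```python
def per_gpu_weight_gb(total_weight_gb: float, tp_size: int) -> float:
    """Weights are sharded roughly evenly under tensor parallelism."""
    return total_weight_gb / tp_size

# 192 GB checkpoint on the 8-GPU launch command above:
print(per_gpu_weight_gb(192, 8))  # -> 24.0
```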
### 【Dependencies】
```
vllm==0.10.0
```
### 【Model Update】
```
2025-07-30
1. fast commit
```
### 【Model Files】
| File Size | Last Updated |
|---------|--------------|
| `192GB` | `2025-07-30` |
### 【Model Download】
```python
from huggingface_hub import snapshot_download
snapshot_download('QuantTrio/GLM-4.5-GPTQ-Int4-Int8Mix', cache_dir="your_local_path")
```
### 【Overview】
# GLM-4.5
<div align="center">
<img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/>
</div>
<p align="center">
👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
<br>
📖 Check out the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank">technical blog</a>.
<br>
📍 Use GLM-4.5 API services on <a href="https://docs.z.ai/guides/llm/glm-4.5">Z.ai API Platform (Global)</a> or <br> <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5">Zhipu AI Open Platform (Mainland China)</a>.
<br>
👉 One click to <a href="https://chat.z.ai">GLM-4.5</a>.
</p>
## Model Introduction
The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.
Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses.
We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.
As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, placing **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at **59.8** while maintaining superior efficiency.

For more eval results, show cases, and technical details, please visit
our [technical blog](https://z.ai/blog/glm-4.5). The technical report will be released soon.
The model code, tool parser and reasoning parser can be found in the implementation of [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py) and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).
## Quick Start
Please refer to our [GitHub page](https://github.com/zai-org/GLM-4.5) for more details.
|
motza0025/blockassist-bc-scurrying_waddling_pelican_1756523210
|
motza0025
| 2025-08-30T03:31:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scurrying waddling pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:30:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scurrying waddling pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ENLACES-VIDEO-INTIMO-DE-GENESIS-PENA/Nuevo.Genesis.Pena.Viral.filtrado.Video.Telegram.Republica.Dominicana
|
ENLACES-VIDEO-INTIMO-DE-GENESIS-PENA
| 2025-08-30T03:29:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T03:28:55Z |
The case of Génesis Peña, a 21-year-old Dominican woman, has shaken the community of Villa González, in Santiago, Dominican Republic, after she reported being drugged and sexually assaulted by at least six men.
The incident has gained further notoriety because the alleged attackers recorded and circulated videos of the assault, which have now gone viral on social media.
Family members, neighbors, and activists are demanding justice and condemning the revictimization caused by the circulation of the material, while the authorities have yet to issue an official statement on the progress of the investigation.
What happened to Génesis Peña?
According to Génesis's account, she went with a friend to a bar in Villa González. There she drank alcohol, but she says a substance may have been slipped into one of her drinks, since she lost consciousness minutes later.
When she woke up at home the next day, she began receiving messages and seeing videos on social media showing the sexual abuse committed against her, allegedly by six men. She has identified five of those involved, known by nicknames such as Bebé, Guaro, Ferrere, and Famsa.
Photo caption: five of the six alleged attackers of Génesis Peña, the only ones she has identified. Photo: courtesy.
Viral videos of Génesis Peña and legal consequences
The videos of the abuse have circulated on platforms such as Telegram and other social networks, sparking strong public outrage. Users report that some people are even sharing the links shamelessly, revictimizing Génesis and harming her emotional well-being.
In the Dominican Republic, the law punishes sexual abuse with prison sentences of 10 to 20 years, which can increase with aggravating factors such as the participation of several people, as in this case.
The distribution of intimate material without consent also carries legal consequences.
Community reaction and calls for justice
Residents of the La Lomita neighborhood, where several of the alleged perpetrators live, have taken to the streets to demand justice. They say the young woman was "effectively drugged and abused" by six men and that the evidence, including the videos, should be sufficient to issue arrest warrants.
So far there has been no official confirmation of arrests or judicial progress, which has increased public pressure on the prosecutor's office and the National Police to act quickly.
Interview with Génesis Peña: her testimony
In a recent interview, Génesis recounted, her voice breaking, the little she remembers:
"I only remember arriving at the place with my friend; we had some drinks and after a while I lost consciousness. They took me to a hospital, but I don't remember how I got home. I learned what had happened from the videos on social media."
The young woman says she only knew some of the men by sight and had never spent time with them before. Her main request is that the case be investigated and justice be done so that it does not go unpunished.
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1756523049
|
rvipitkirubbe
| 2025-08-30T03:29:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:29:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756524337
|
liukevin666
| 2025-08-30T03:26:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:26:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
charrywhite/LanPaint
|
charrywhite
| 2025-08-30T03:26:05Z | 0 | 0 |
comfyui-extension
|
[
"comfyui-extension",
"comfyui",
"inpainting",
"stable-diffusion",
"image-generation",
"computer-vision",
"image-to-image",
"en",
"arxiv:2502.03491",
"license:gpl-3.0",
"region:us"
] |
image-to-image
| 2025-08-30T02:49:09Z |
---
language:
- en
tags:
- comfyui
- inpainting
- stable-diffusion
- image-generation
- computer-vision
license: gpl-3.0
library_name: comfyui-extension
pipeline_tag: image-to-image
---
<div align="center">
# LanPaint: Universal Inpainting Sampler with "Think Mode"
[](https://arxiv.org/abs/2502.03491)
[](https://github.com/scraed/LanPaintBench)
[](https://github.com/comfyanonymous/ComfyUI)
[](https://scraed.github.io/scraedBlog/)
[](https://github.com/scraed/LanPaint/stargazers)
</div>
Universally applicable inpainting ability for every model. LanPaint sampler lets the model "think" through multiple iterations before denoising, enabling you to invest more computation time for superior inpainting quality.
This is the official implementation of ["Lanpaint: Training-Free Diffusion Inpainting with Exact and Fast Conditional Inference"](https://arxiv.org/abs/2502.03491). The repository is for ComfyUI extension. Local Python benchmark code is published here: [LanPaintBench](https://github.com/scraed/LanPaintBench).

Check the [Masked Qwen Edit Workflow](https://github.com/scraed/LanPaint/tree/master/examples/Example_14). You need to follow the ComfyUI version of the [Qwen Image Edit workflow](https://docs.comfy.org/tutorials/image/qwen/qwen-image-edit) to download and install the model.

Also check [Qwen Inpaint Workflow](https://github.com/scraed/LanPaint/tree/master/examples/Example_13) and [Qwen Outpaint Workflow](https://github.com/scraed/LanPaint/tree/master/examples/Example_12). You need to follow the ComfyUI version of [Qwen Image workflow](https://docs.comfy.org/tutorials/image/qwen/qwen-image) to download and install the model.
## Table of Contents
- [Features](#features)
- [Quickstart](#quickstart)
- [How to Use Examples](#how-to-use-examples)
- [Examples](#examples)
- [Qwen Image](#example-qwen-image-inpaintlanpaint-k-sampler-5-steps-of-thinking)
- [HiDream](#example-hidream-inpaint-lanpaint-k-sampler-5-steps-of-thinking)
- [SD 3.5](#example-sd-35-inpaintlanpaint-k-sampler-5-steps-of-thinking)
- [Flux](#example-flux-inpaintlanpaint-k-sampler-5-steps-of-thinking)
- [SDXL Examples](#example-sdxl-0-character-consistency-side-view-generation-lanpaint-k-sampler-5-steps-of-thinking)
- [Usage](#usage)
- [Basic Sampler](#basic-sampler)
- [Advanced Sampler](#lanpaint-ksampler-advanced)
- [Tuning Guide](#lanpaint-ksampler-advanced-tuning-guide)
- [Community Showcase](#community-showcase-)
- [Updates](#updates)
- [ToDo](#todo)
- [Citation](#citation)
## Features
- **Universal Compatibility** – Works instantly with almost any model (**SD 1.5, XL, 3.5, Flux, HiDream, Qwen-Image or custom LoRAs**) and ControlNet.

- **No Training Needed** – Works out of the box with your existing model.
- **Easy to Use** – Same workflow as standard ComfyUI KSampler.
- **Flexible Masking** – Supports any mask shape, size, or position for inpainting/outpainting.
- **No Workarounds** – Generates 100% new content (no blending or smoothing) without relying on partial denoising.
- **Beyond Inpainting** – You can even use it as a simple way to generate consistent characters.
**Warning**: LanPaint has degraded performance on distillation models, such as Flux.dev, due to a similar [issue with LORA training](https://medium.com/@zhiwangshi28/why-flux-lora-so-hard-to-train-and-how-to-overcome-it-a0c70bc59eaf). Please use low flux guidance (1.0-2.0) to mitigate this [issue](https://github.com/scraed/LanPaint/issues/30).
## Quickstart
1. **Install ComfyUI**: Follow the official [ComfyUI installation guide](https://docs.comfy.org/get_started) to set up ComfyUI on your system. Or ensure your ComfyUI version > 0.3.11.
2. **Install ComfyUI-Manager**: Add the [ComfyUI-Manager](https://github.com/ltdrdata/ComfyUI-Manager) for easy extension management.
3. **Install LanPaint Nodes**:
- **Via ComfyUI-Manager**: Search for "[LanPaint](https://registry.comfy.org/publishers/scraed/nodes/LanPaint)" in the manager and install it directly.
- **Manually**: Click "Install via Git URL" in ComfyUI-Manager and input the GitHub repository link:
```
https://github.com/scraed/LanPaint.git
```
Alternatively, clone this repository into the `ComfyUI/custom_nodes` folder.
4. **Restart ComfyUI**: Restart ComfyUI to load the LanPaint nodes.
Once installed, you'll find the LanPaint nodes under the "sampling" category in ComfyUI. Use them just like the default KSampler for high-quality inpainting!
## **How to Use Examples:**
1. Navigate to the relevant **example** folder (e.g. `Example_1`) and download all of the pictures.
2. Drag **InPainted_Drag_Me_to_ComfyUI.png** into ComfyUI to load the workflow.
3. Download the required model (e.g. by clicking **Model Used in This Example**).
4. Load the model in ComfyUI.
5. Upload **Masked_Load_Me_in_Loader.png** to the **"Load image"** node in the **"Mask image for inpainting"** group (second from left), or the **Prepare Image** node.
6. Queue the task; you will get inpainted results from LanPaint. Some examples also give you inpainted results from the following methods for comparison:
- **[VAE Encode for Inpainting](https://comfyanonymous.github.io/ComfyUI_examples/inpaint/)**
- **[Set Latent Noise Mask](https://comfyui-wiki.com/en/tutorial/basic/how-to-inpaint-an-image-in-comfyui)**
## Examples
### Example Qwen Image: InPaint(LanPaint K Sampler, 5 steps of thinking)
We are excited to announce that LanPaint now supports Qwen Image, providing powerful inpainting capabilities for image editing.

[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_11)
You need to follow the ComfyUI version of [Qwen Image workflow](https://docs.comfy.org/tutorials/image/qwen/qwen-image) to download and install the model.
The following examples utilize a random seed of 0 to generate a batch of 4 images for variance demonstration and fair comparison. (Note: Generating 4 images may exceed your GPU memory; please adjust the batch size as necessary.)
### Example HiDream: InPaint (LanPaint K Sampler, 5 steps of thinking)

[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_8)
You need to follow the ComfyUI version of [HiDream workflow](https://docs.comfy.org/tutorials/image/hidream/hidream-i1) to download and install the model.
### Example HiDream: OutPaint(LanPaint K Sampler, 5 steps of thinking)
.jpg?raw=true)
[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_10)
You need to follow the ComfyUI version of [HiDream workflow](https://docs.comfy.org/tutorials/image/hidream/hidream-i1) to download and install the model. Thanks [Amazon90](https://github.com/Amazon90) for providing this example.
### Example SD 3.5: InPaint(LanPaint K Sampler, 5 steps of thinking)

[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_9)
You need to follow the ComfyUI version of [SD 3.5 workflow](https://comfyui-wiki.com/en/tutorial/advanced/stable-diffusion-3-5-comfyui-workflow) to download and install the model.
### Example Flux: InPaint(LanPaint K Sampler, 5 steps of thinking)

[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_7)
[Model Used in This Example](https://huggingface.co/Comfy-Org/flux1-dev/blob/main/flux1-dev-fp8.safetensors)
(Note: Prompt First mode is disabled for Flux, as it does not use CFG guidance.)
### Example SDXL 0: Character Consistency (Side View Generation) (LanPaint K Sampler, 5 steps of thinking)

[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_6)
[Model Used in This Example](https://civitai.com/models/1188071?modelVersionId=1408658)
(Trick 1: You can emphasize the character by copying its image multiple times in Photoshop. Here one extra copy has been made.)
(Trick 2: Use prompts like multiple views, multiple angles, clone, turnaround. Use LanPaint's Prompt First mode (not supported on Flux).)
(Trick 3: Remember that LanPaint can inpaint: mask the non-consistent regions and try again!)
### Example SDXL 1: Basket to Basket Ball (LanPaint K Sampler, 2 steps of thinking).

[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_1)
[Model Used in This Example](https://civitai.com/models/1188071?modelVersionId=1408658)
### Example SDXL 2: White Shirt to Blue Shirt (LanPaint K Sampler, 5 steps of thinking)

[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_2)
[Model Used in This Example](https://civitai.com/models/1188071?modelVersionId=1408658)
### Example SDXL 3: Smile to Sad (LanPaint K Sampler, 5 steps of thinking)

[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_3)
[Model Used in This Example](https://civitai.com/models/133005/juggernaut-xl)
### Example SDXL 4: Damage Restoration (LanPaint K Sampler, 5 steps of thinking)

[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_4)
[Model Used in This Example](https://civitai.com/models/133005/juggernaut-xl)
### Example SDXL 5: Huge Damage Restoration (LanPaint K Sampler, 20 steps of thinking)

[View Workflow & Masks](https://github.com/scraed/LanPaint/tree/master/examples/Example_5)
[Model Used in This Example](https://civitai.com/models/133005/juggernaut-xl)
See more use cases, such as inpainting on [fine-tuned models](https://github.com/scraed/LanPaint/issues/12#issuecomment-2938662021) and [face swapping](https://github.com/scraed/LanPaint/issues/12#issuecomment-2938723501), thanks to [Amazon90](https://github.com/Amazon90).
## Usage
**Workflow Setup**
Same as default ComfyUI KSampler - simply replace with LanPaint KSampler nodes. The inpainting workflow is the same as the [SetLatentNoiseMask](https://comfyui-wiki.com/zh/comfyui-nodes/latent/inpaint/set-latent-noise-mask) inpainting workflow.
**Note**
- LanPaint requires binary masks (values of 0 or 1) without opacity or smoothing. To ensure compatibility, set the mask's **opacity and hardness to maximum** in your mask editor. During inpainting, any mask with smoothing or gradients will automatically be converted to a binary mask.
- LanPaint relies heavily on your text prompts to guide inpainting - explicitly describe the content you want generated in the masked area. If results show artifacts or mismatched elements, counteract them with targeted negative prompts.
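The binary-mask requirement above can also be enforced programmatically by thresholding a soft mask before use. A minimal illustrative sketch (the function name and default threshold are assumptions, not part of LanPaint):

```python
def binarize_mask(mask, threshold=0.5):
    # LanPaint expects hard 0/1 masks: values at or above the threshold
    # become 1.0, everything below becomes 0.0.
    return [[1.0 if v >= threshold else 0.0 for v in row] for row in mask]

print(binarize_mask([[0.2, 0.8], [0.5, 0.49]]))  # [[0.0, 1.0], [1.0, 0.0]]
```

This mirrors what the sampler does automatically when it converts a smoothed mask to a binary one.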
## Basic Sampler

- LanPaint KSampler: The most basic and easy to use sampler for inpainting.
- LanPaint KSampler (Advanced): Full control of all parameters.
### LanPaint KSampler
Simplified interface with recommended defaults:
- Steps: 20 - 50. More steps give more "thinking" and better results.
- LanPaint NumSteps: The number of thinking turns before denoising. 5 is recommended for most tasks (which makes sampling 5× slower than without thinking); use 10 for more challenging tasks.
- LanPaint Prompt mode: Image First mode or Prompt First mode. Image First mode focuses on the image and inpaints from the image context (possibly ignoring the prompt), while Prompt First mode weights the prompt more heavily. Use Prompt First mode for tasks like character consistency. (Technically, Prompt First mode changes the CFG scale to a negative value inside the BIG score to emphasize the prompt, which costs some image quality.)
### LanPaint KSampler (Advanced)
Full parameter control:
**Key Parameters**
| Parameter | Range | Description |
|-----------|-------|-------------|
| `Steps` | 0-100 | Total steps of diffusion sampling. Higher means better inpainting. Recommend 20-50. |
| `LanPaint_NumSteps` | 0-20 | Reasoning iterations per denoising step ("thinking depth"). Easy task: 2-5. Hard task: 5-10 |
| `LanPaint_Lambda` | 0.1-50 | Content alignment strength (higher = stricter). Recommend 4.0 - 10.0 |
| `LanPaint_StepSize` | 0.1-1.0 | The StepSize of each thinking step. Recommend 0.1-0.5. |
| `LanPaint_Beta` | 0.1-2.0 | The step-size ratio between the masked and unmasked regions. A small value can compensate for high lambda values. Recommend 1.0 |
| `LanPaint_Friction` | 0.0-100.0 | The friction of the Langevin dynamics. Higher is slower but more stable; lower is faster but less stable. Recommend 10.0 - 20.0|
| `LanPaint_EarlyStop` | 0-10 | Stop LanPaint iteration before the final sampling step. Helps to remove artifacts in some cases. Recommend 1-5|
| `LanPaint_PromptMode` | Image First / Prompt First | Image First mode focuses on the image context, maybe ignore prompt. Prompt First mode focuses more on the prompt. |
For detailed descriptions of each parameter, simply hover your mouse over the corresponding input field to view tooltips with additional information.
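For intuition, the step-size, friction, and lambda parameters in the table map onto an underdamped Langevin-style update. The sketch below is purely schematic, an assumed toy form rather than LanPaint's actual implementation, but it shows why higher friction is slower yet more stable and why lambda acts as an alignment pull:

```python
def langevin_step(x, v, grad, step_size=0.1, friction=2.0, lam=1.0):
    """One schematic underdamped Langevin update (toy, 1-D, noise-free).

    grad      : pull toward consistency with the known region (a scalar here)
    lam       : alignment strength  -> analogous to LanPaint_Lambda
    friction  : velocity damping    -> analogous to LanPaint_Friction
    step_size : move scale          -> analogous to LanPaint_StepSize
    (A real Langevin sampler also injects noise; omitted for determinism.)
    """
    v = v + step_size * (lam * grad - friction * v)
    x = x + step_size * v
    return x, v

# Toy run: the update relaxes x toward the "consistent" value 0.
x, v = 1.0, 0.0
for _ in range(200):
    x, v = langevin_step(x, v, grad=-x)
print(abs(x) < 0.05)  # True: the damped dynamics have converged
```

In this discrete form, a too-large step size or too-small friction makes the update diverge, which mirrors the instability advice in the tuning guide.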
### LanPaint Mask Blend
This node blends the original image with the inpainted image based on the mask. It is useful if you want the unmasked region to match the original image pixel-perfectly.
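Based on that description, the blend is presumably the standard per-pixel convex combination `out = mask * inpainted + (1 - mask) * original`. A minimal sketch (illustrative only, not the node's actual code):

```python
def mask_blend(original, inpainted, mask):
    # Keep original pixels where mask == 0; take inpainted pixels where mask == 1.
    return [
        [m * ip + (1.0 - m) * o for o, ip, m in zip(orow, irow, mrow)]
        for orow, irow, mrow in zip(original, inpainted, mask)
    ]

print(mask_blend([[10.0, 20.0]], [[99.0, 77.0]], [[0.0, 1.0]]))  # [[10.0, 77.0]]
```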
## LanPaint KSampler (Advanced) Tuning Guide
For challenging inpainting tasks:
1️⃣ **Boost Quality**
Increase the **total number of sampling steps** (very important!), **LanPaint_NumSteps** (thinking iterations), or **LanPaint_Lambda** if the inpainted result does not meet your expectations.
2️⃣ **Boost Speed**
Decrease **LanPaint_NumSteps** to accelerate generation! If you need fewer steps but still want good results, consider:
- **Increasing LanPaint_StepSize** to speed up the thinking process.
- **Decreasing LanPaint_Friction** to make the Langevin dynamics converge faster.
3️⃣ **Fix Instability**:
If the results show weird textures, try:
- Reduce **LanPaint_Friction** to make the Langevin dynamics more stable.
- Reduce **LanPaint_StepSize** to use smaller step size.
- Reduce **LanPaint_Beta** if you are using a high lambda value.
⚠️ **Notes**:
- For effective tuning, **fix the seed** and adjust parameters incrementally while observing the results. This helps isolate the impact of each setting. It is better to tune on a batch of images to avoid overfitting to a single image.
## Community Showcase [](#community-showcase-)
Discover how the community is using LanPaint! Here are some user-created tutorials:
- [Ai绘画进阶148-三大王炸!庆祝高允贞出道6周年!T8即将直播?当AI绘画学会深度思考?!万能修复神器LanPaint,万物皆可修!-T8 Comfyui教程](https://www.youtube.com/watch?v=Z4DSTv3UPJo)
- [Ai绘画进阶151-真相了!T8竟是个AI?!LanPaint进阶(二),人物一致性,多视角实验性测试,新参数讲解,工作流分享-T8 Comfyui教程](https://www.youtube.com/watch?v=landiRhvF3k)
- [重绘和三视图角色一致性解决新方案!LanPaint节点尝试](https://www.youtube.com/watch?v=X0WbXdm6FA0)
- [ComfyUI: HiDream with Perturbation Upscale, LanPaint Inpainting (Workflow Tutorial)](https://www.youtube.com/watch?v=2-mGe4QVIIw&t=2785s)
- [ComfyUI必备LanPaint插件超详细使用教程](https://plugin.aix.ink/archives/lanpaint)
Submit a PR to add your tutorial/video here, or open an [Issue](https://github.com/scraed/LanPaint/issues) with details!
## Updates
- 2025/08/08
- Add Qwen image support
- 2025/06/21
- Update the algorithm with enhanced stability and outpaint performance.
- Add outpaint example
- Supports Sampler Custom (Thanks to [MINENEMA](https://github.com/MINENEMA))
- 2025/06/04
- Add more sampler support.
- Add early stopping to advanced sampler.
- 2025/05/28
- Major update on the Langevin solver. It is now much faster and more stable.
- Greatly simplified the parameters for advanced sampler.
- Fix performance issue on Flux and SD 3.5
- 2025/04/16
- Added Primary HiDream support
- 2025/03/22
- Added Primary Flux support
- Added Tease Mode
- 2025/03/10
- LanPaint has received a major update! All examples now use the LanPaint K Sampler, offering a simplified interface with enhanced performance and stability.
- 2025/03/06:
- Bug Fix for str not callable error and unpack error. Big thanks to [jamesWalker55](https://github.com/jamesWalker55) and [EricBCoding](https://github.com/EricBCoding).
## ToDo
- Try implementing a Detailer
- ~~Provide inference code on without GUI.~~ Check our local Python benchmark code [LanPaintBench](https://github.com/scraed/LanPaintBench).
## Citation
```
@misc{zheng2025lanpainttrainingfreediffusioninpainting,
title={Lanpaint: Training-Free Diffusion Inpainting with Exact and Fast Conditional Inference},
author={Candi Zheng and Yuan Lan and Yang Wang},
year={2025},
eprint={2502.03491},
archivePrefix={arXiv},
primaryClass={eess.IV},
url={https://arxiv.org/abs/2502.03491},
}
```
|
mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF
|
mradermacher
| 2025-08-30T03:23:57Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification",
"base_model:quantized:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T03:14:00Z |
---
base_model: EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
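Concatenating multi-part GGUF files is plain byte-wise joining of the parts in order. A hedged sketch (this repo's quants are single-file; the part filenames below are invented for illustration):

```python
import pathlib
import tempfile

def join_parts(part_paths, out_path):
    # Multi-part GGUF splits are raw byte slices, so rejoining them
    # is simple ordered concatenation.
    with open(out_path, "wb") as out:
        for part in part_paths:
            out.write(pathlib.Path(part).read_bytes())

# Self-contained demo with dummy part files:
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "m.gguf.part1of2").write_bytes(b"GGUF-head")
(tmp / "m.gguf.part2of2").write_bytes(b"-tail")
join_parts([tmp / "m.gguf.part1of2", tmp / "m.gguf.part2of2"], tmp / "m.gguf")
```

The equivalent shell one-liner is `cat model.gguf.part* > model.gguf`.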
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.Q5_K_M.gguf) | Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/mimir-mistral-500m-core-scratch-GGUF
|
mradermacher
| 2025-08-30T03:21:07Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:NbAiLab/mimir-mistral-500m-core-scratch",
"base_model:quantized:NbAiLab/mimir-mistral-500m-core-scratch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T03:12:01Z |
---
base_model: NbAiLab/mimir-mistral-500m-core-scratch
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/NbAiLab/mimir-mistral-500m-core-scratch
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#mimir-mistral-500m-core-scratch-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mimir-mistral-500m-core-scratch-GGUF/resolve/main/mimir-mistral-500m-core-scratch.f16.gguf) | f16 | 1.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
BootesVoid/cmexleien05k3sr53jpx62got_cmexnn3o105ngsr53ns35y2dh
|
BootesVoid
| 2025-08-30T03:21:06Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-30T03:21:05Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BELLA
---
# Cmexleien05K3Sr53Jpx62Got_Cmexnn3O105Ngsr53Ns35Y2Dh
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BELLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "BELLA",
    "lora_weights": "https://huggingface.co/BootesVoid/cmexleien05k3sr53jpx62got_cmexnn3o105ngsr53ns35y2dh/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmexleien05k3sr53jpx62got_cmexnn3o105ngsr53ns35y2dh', weight_name='lora.safetensors')
image = pipeline('BELLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmexleien05k3sr53jpx62got_cmexnn3o105ngsr53ns35y2dh/discussions) to add images that show off what you’ve made with this LoRA.
|
klmdr22/blockassist-bc-wild_loud_newt_1756523893
|
klmdr22
| 2025-08-30T03:18:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:18:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/populism_classifier_234
|
AnonymousCS
| 2025-08-30T03:18:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_xlmr_large",
"base_model:finetune:AnonymousCS/populism_xlmr_large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-30T03:13:04Z |
---
library_name: transformers
license: mit
base_model: AnonymousCS/populism_xlmr_large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_234
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_234
This model is a fine-tuned version of [AnonymousCS/populism_xlmr_large](https://huggingface.co/AnonymousCS/populism_xlmr_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2525
- Accuracy: 0.9371
- 1-f1: 0.0
- 1-recall: 0.0
- 1-precision: 0.0
- Balanced Acc: 0.5
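The zero values for 1-f1, 1-recall, and 1-precision indicate the classifier predicts the majority (negative) class for every example, which is why balanced accuracy sits at 0.5 despite the high raw accuracy. A minimal sketch (with hypothetical labels, not the actual eval data) of how these numbers arise:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall for binary labels."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# Hypothetical degenerate predictor: always outputs the negative class (0).
y_true = [0] * 15 + [1]   # heavily imbalanced, roughly like the eval split here
y_pred = [0] * 16

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                            # high (~0.94) from the class imbalance alone
print(balanced_accuracy(y_true, y_pred))   # 0.5: recall is 1.0 on class 0, 0.0 on class 1
```

Raw accuracy rewards the imbalance; balanced accuracy exposes that the positive class is never recovered.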
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:--------:|:-----------:|:------------:|
| 0.0252 | 1.0 | 124 | 0.2826 | 0.9371 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.1568 | 2.0 | 248 | 0.2368 | 0.9371 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.0719 | 3.0 | 372 | 0.2349 | 0.9371 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.1553 | 4.0 | 496 | 0.2655 | 0.9371 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.4661 | 5.0 | 620 | 0.2487 | 0.9371 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.0348 | 6.0 | 744 | 0.2525 | 0.9371 | 0.0 | 0.0 | 0.0 | 0.5 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
arianaazarbal/standard_tpr_0.65-20250823_060848_grpo_20250830_031518-policy-adapter
|
arianaazarbal
| 2025-08-30T03:16:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T03:16:06Z |
# Policy Model LoRA Adapter (GRPO/DPO)
Experiment: standard_tpr_0.65
Timestamp: 20250823_060848_grpo_20250830_031518
This model was trained as part of the deception-evasion-honesty experiments.
## Model Details
- **Type**: Policy Model LoRA Adapter (GRPO/DPO)
- **Experiment Name**: standard_tpr_0.65
- **Training Timestamp**: 20250823_060848_grpo_20250830_031518
|
bah63843/blockassist-bc-plump_fast_antelope_1756523639
|
bah63843
| 2025-08-30T03:14:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:14:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vinooj/Llama-3.2-3B-ascii-cats-lora-q4_k_m-GGUF
|
vinooj
| 2025-08-30T03:13:02Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:quantized:unsloth/Llama-3.2-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T03:12:29Z |
---
base_model: unsloth/Llama-3.2-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** vinooj
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756521945
|
vwzyrraz7l
| 2025-08-30T03:10:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:10:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756521899
|
Loder-S
| 2025-08-30T03:08:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:08:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756523178
|
klmdr22
| 2025-08-30T03:07:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:06:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756522992
|
bah63843
| 2025-08-30T03:04:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:03:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-dextrous_striped_ant_1756521376
|
motza0025
| 2025-08-30T03:02:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dextrous striped ant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:01:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dextrous striped ant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crislmfroes/svla-panda-open-base-cabinet-sim-v24
|
crislmfroes
| 2025-08-30T03:01:08Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:crislmfroes/panda-open-base-cabinet-sim-v24",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-30T03:00:55Z |
---
base_model: lerobot/smolvla_base
datasets: crislmfroes/panda-open-base-cabinet-sim-v24
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756522359
|
liukevin666
| 2025-08-30T02:57:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T02:53:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nimmytio/blockassist-bc-huge_tawny_wasp_1756522517
|
nimmytio
| 2025-08-30T02:55:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge tawny wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T02:55:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge tawny wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/populism_classifier_230
|
AnonymousCS
| 2025-08-30T02:51:29Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_xlmr_large",
"base_model:finetune:AnonymousCS/populism_xlmr_large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-26T08:39:57Z |
---
library_name: transformers
license: mit
base_model: AnonymousCS/populism_xlmr_large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_230
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_230
This model is a fine-tuned version of [AnonymousCS/populism_xlmr_large](https://huggingface.co/AnonymousCS/populism_xlmr_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2072
- Accuracy: 0.9581
- 1-f1: 0.0
- 1-recall: 0.0
- 1-precision: 0.0
- Balanced Acc: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:--------:|:-----------:|:------------:|
| 0.4109 | 1.0 | 96 | 0.2241 | 0.9581 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.0126 | 2.0 | 192 | 0.2105 | 0.9581 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.3376 | 3.0 | 288 | 0.2040 | 0.9581 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.0187 | 4.0 | 384 | 0.1921 | 0.9581 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.3191 | 5.0 | 480 | 0.2003 | 0.9581 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.173 | 6.0 | 576 | 0.2228 | 0.9581 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.0172 | 7.0 | 672 | 0.2072 | 0.9581 | 0.0 | 0.0 | 0.0 | 0.5 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756520067
|
NahedDom
| 2025-08-30T02:50:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T02:50:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756522104
|
klmdr22
| 2025-08-30T02:49:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T02:49:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RikiyaT/mxbai-ettin-17m-arxiv-1.4m-phaseA-ft
|
RikiyaT
| 2025-08-30T02:47:47Z | 0 | 0 | null |
[
"safetensors",
"modernbert",
"region:us"
] | null | 2025-08-30T02:47:42Z |
# RikiyaT/mxbai-ettin-17m-arxiv-1.4m-phaseA-ft
Dense retrieval encoder (Ettin / ModernBERT) — Transformers
- Base model: RikiyaT/mxbai-ettin-17m-pretrained
- Pooling: mean
- Projection: **identity** (dim=256)
**SentenceTransformers variant**: [RikiyaT/mxbai-ettin-17m-arxiv-1.4m-phaseA-ft-st](https://huggingface.co/RikiyaT/mxbai-ettin-17m-arxiv-1.4m-phaseA-ft-st)
### Usage
```python
import torch
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("RikiyaT/mxbai-ettin-17m-arxiv-1.4m-phaseA-ft", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("RikiyaT/mxbai-ettin-17m-arxiv-1.4m-phaseA-ft", trust_remote_code=True)
# identity projection
def encode(texts, prompt="search_query: "):
    x = tokenizer([prompt + t for t in texts], padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**x).last_hidden_state
    mask = x["attention_mask"][..., None].bool()
    emb = out.masked_fill(~mask, 0.0).sum(1) / x["attention_mask"].sum(1, keepdim=True)
    emb = torch.nn.functional.normalize(emb, p=2, dim=1)
    return emb
```
Prompts used in training:
- query: `search_query: {text}`
- document: `search_document: {text}`
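As a sanity check on the pooling logic above, here is a self-contained sketch (using random tensors in place of real tokenizer/model outputs — shapes and values are illustrative only) showing that the masked mean-pool followed by L2 normalization yields unit-length embeddings:

```python
import torch

# Stand-ins for model outputs: batch of 2, seq len 5, hidden dim 256.
hidden = torch.randn(2, 5, 256)
attention_mask = torch.tensor([[1, 1, 1, 0, 0],
                               [1, 1, 1, 1, 1]])

# Zero out padding positions, then average over the real tokens only.
mask = attention_mask[..., None].bool()
summed = hidden.masked_fill(~mask, 0.0).sum(1)
emb = summed / attention_mask.sum(1, keepdim=True)
emb = torch.nn.functional.normalize(emb, p=2, dim=1)

# Each row is now unit length, so dot products are cosine similarities.
print(emb.shape)        # torch.Size([2, 256])
print(emb.norm(dim=1))  # both ~1.0
```

Because the embeddings are normalized, retrieval scores can be computed with a plain matrix product between query and document embeddings.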
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1756520250
|
pempekmangedd
| 2025-08-30T02:41:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T02:41:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
honmik/blockassist-bc-patterned_howling_salamander_1756521410
|
honmik
| 2025-08-30T02:37:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned howling salamander",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T02:37:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned howling salamander
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
multimodalart/tarotcard_poli_now_goes-lora
|
multimodalart
| 2025-08-30T02:34:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"qwen-image",
"qwen-image-diffusers",
"template:sd-lora",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-30T01:56:06Z |
---
base_model: Qwen/Qwen-Image
library_name: diffusers
license: apache-2.0
instance_prompt: a trtcrd of
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- qwen-image
- qwen-image-diffusers
- template:sd-lora
---
|
qgallouedec/Qwen3-0.6B-SFT-20250830022956
|
qgallouedec
| 2025-08-30T02:31:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"hf_jobs",
"trl",
"sft",
"conversational",
"dataset:trl-lib/Capybara",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T02:30:44Z |
---
base_model: Qwen/Qwen3-0.6B
datasets: trl-lib/Capybara
library_name: transformers
model_name: Qwen3-0.6B-SFT-20250830022956
tags:
- generated_from_trainer
- hf_jobs
- trl
- sft
licence: license
---
# Model Card for Qwen3-0.6B-SFT-20250830022956
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-0.6B-SFT-20250830022956", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756520953
|
bah63843
| 2025-08-30T02:30:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T02:29:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rindartog/AceInstruct-1.5B-Gensyn-Swarm-foraging_dextrous_tortoise
|
rindartog
| 2025-08-30T02:28:55Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am foraging_dextrous_tortoise",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-23T00:48:46Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am foraging_dextrous_tortoise
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Soughing/MLRA
|
Soughing
| 2025-08-30T02:26:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T17:59:11Z |
---
license: apache-2.0
---
|