modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-09 00:41:25) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 549 distinct values) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-09 00:41:08) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---|
qownscks/banana_hand_to_hand
|
qownscks
| 2025-09-08T20:08:01Z | 24 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:qownscks/banana_hand_to_hand",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-06T13:54:47Z |
---
base_model: lerobot/smolvla_base
datasets: qownscks/banana_hand_to_hand
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo id with **eval\_** and point `--policy.path` to a local or Hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
maukluchoda/blockassist-bc-placid_stinky_buffalo_1757362057
|
maukluchoda
| 2025-09-08T20:07:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid stinky buffalo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T20:07:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid stinky buffalo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
b9504148/blockassist-bc-thorny_whiskered_opossum_1757362027
|
b9504148
| 2025-09-08T20:07:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny whiskered opossum",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T20:07:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny whiskered opossum
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
craftcore/lightning
|
craftcore
| 2025-09-08T20:07:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2025-09-08T19:43:24Z |
---
license: apache-2.0
---
|
arvisom516/blockassist-bc-marine_tough_hornet_1757361969
|
arvisom516
| 2025-09-08T20:06:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine tough hornet",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T20:06:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine tough hornet
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Paradoxis/Qwen2.5-VL-3B-Instruct-GRPO
|
Paradoxis
| 2025-09-08T20:05:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"hf_jobs",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-08T09:34:18Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: Qwen2.5-VL-3B-Instruct-GRPO
tags:
- generated_from_trainer
- grpo
- hf_jobs
- trl
licence: license
---
# Model Card for Qwen2.5-VL-3B-Instruct-GRPO
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Paradoxis/Qwen2.5-VL-3B-Instruct-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/flofiz-universit-de-bourgogne/GRPO/runs/mty0pdyc)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
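As a point of reference, here is a minimal, hypothetical sketch of a GRPO run using TRL's `GRPOTrainer`; the dataset and reward function below are illustrative placeholders, not the actual recipe behind this checkpoint.
```python
# Hedged sketch only: GRPO fine-tuning with TRL. The dataset and reward
# function are stand-ins taken from the TRL quickstart, not this model's setup.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 200 characters.
    return [-abs(200 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")
training_args = GRPOConfig(output_dir="Qwen2.5-VL-3B-Instruct-GRPO", num_generations=4)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-VL-3B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```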
### Framework versions
- TRL: 0.23.0.dev0
- Transformers: 4.56.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Muapi/wizard-s-spellbook-taped-faces
|
Muapi
| 2025-09-08T20:05:00Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-08T20:04:48Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Wizard's Spellbook: Taped Faces

**Base model**: Flux.1 D
**Trained words**: s3ll0t4p3, s3ll0t4p3 closeup photograph
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1144079@1286742", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
goshujaieja/blockassist-bc-untamed_armored_ram_1757361824
|
goshujaieja
| 2025-09-08T20:04:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed armored ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T20:04:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed armored ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tewsharlesau/blockassist-bc-nasty_hibernating_rabbit_1757361797
|
tewsharlesau
| 2025-09-08T20:03:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nasty hibernating rabbit",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T20:03:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nasty hibernating rabbit
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cesarcosentino/blockassist-bc-colorful_sturdy_anteater_1757361773
|
cesarcosentino
| 2025-09-08T20:03:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful sturdy anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T20:02:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful sturdy anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cyprogabellivari/blockassist-bc-singing_territorial_cod_1757361748
|
cyprogabellivari
| 2025-09-08T20:02:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing territorial cod",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T20:02:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing territorial cod
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cddoan/phishingAI
|
cddoan
| 2025-09-08T20:02:27Z | 2 | 0 | null |
[
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-02T22:07:49Z |
---
license: mit
---
## Adding the model to Ollama
- Install Ollama, then download the model weights and the accompanying model file.
- Run: `ollama create phishingAI -f modelFile`
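A minimal sketch of that workflow, assuming the GGUF weights and the `modelFile` are in the current directory:
```bash
# Register the model with Ollama from the model file, then chat with it locally
ollama create phishingAI -f modelFile
ollama run phishingAI
```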
|
Muapi/dalcefo_flux1.dev-kasuki
|
Muapi
| 2025-09-08T20:02:25Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-08T20:01:14Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Dalcefo_Flux1.Dev-Kasuki

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:677854@758780", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
abadkibriya3524/blockassist-bc-timid_padded_ape_1757361717
|
abadkibriya3524
| 2025-09-08T20:02:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"timid padded ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T20:02:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid padded ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tauteolifrancis/blockassist-bc-nocturnal_vicious_buffalo_1757361622
|
tauteolifrancis
| 2025-09-08T20:00:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nocturnal vicious buffalo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T20:00:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nocturnal vicious buffalo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757361525
|
bah63843
| 2025-09-08T19:59:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:59:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Karthikappi0011/gemma-3-finetuned-v0.1-supervised_data
|
Karthikappi0011
| 2025-09-08T19:58:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-08T19:56:37Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Karthikappi0011
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yandjaynejenei/blockassist-bc-hairy_shiny_hyena_1757361486
|
yandjaynejenei
| 2025-09-08T19:58:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy shiny hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:58:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy shiny hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ghost613/VC-MJY_Woman_40s-0_preprocessed-12
|
ghost613
| 2025-09-08T19:56:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-06T08:09:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
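The repository is tagged as a Whisper checkpoint for automatic speech recognition, so a hypothetical starting point (not part of the original card) might look like the following; the audio file name is a placeholder.
```python
# Hedged sketch: load this repo as a Whisper ASR pipeline (assumed usage).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ghost613/VC-MJY_Woman_40s-0_preprocessed-12",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```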
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aidan-ucc/LoRA-qwen2.5VL7b-3900-eco
|
aidan-ucc
| 2025-09-08T19:54:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-08T19:38:25Z |
---
base_model: unsloth/Qwen2.5-VL-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** aidan-ucc
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-VL-7B-Instruct
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bah63843/blockassist-bc-plump_fast_antelope_1757361131
|
bah63843
| 2025-09-08T19:53:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:52:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bingbangboom/QwenPhil
|
bingbangboom
| 2025-09-08T19:52:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-08T19:52:37Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bingbangboom
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GigaGabe/vit_base-oxford-iiit-pets
|
GigaGabe
| 2025-09-08T19:52:44Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-09-08T17:45:26Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit_base-oxford-iiit-pets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2044
- Accuracy: 0.9445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
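As a rough guide, the hyperparameters above correspond approximately to the `TrainingArguments` below; this is a hedged sketch rather than the exact training script, and dataset loading/preprocessing are omitted.
```python
# Approximate TrainingArguments implied by the listed hyperparameters (sketch only).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit_base-oxford-iiit-pets",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```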
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3791 | 1.0 | 370 | 0.2934 | 0.9296 |
| 0.2011 | 2.0 | 740 | 0.2223 | 0.9364 |
| 0.1679 | 3.0 | 1110 | 0.2024 | 0.9364 |
| 0.1518 | 4.0 | 1480 | 0.1935 | 0.9391 |
| 0.1355 | 5.0 | 1850 | 0.1911 | 0.9418 |
### Framework versions
- Transformers 4.56.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
silverbenehi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_running_kangaroo
|
silverbenehi
| 2025-09-08T19:49:25Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bold running kangaroo",
"trl",
"genrl-swarm",
"I am bold_running_kangaroo",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-09T21:11:49Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_running_kangaroo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bold running kangaroo
- trl
- genrl-swarm
- I am bold_running_kangaroo
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_running_kangaroo
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silverbenehi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_running_kangaroo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
palmart111/blockassist-bc-armored_feline_capybara_1757360916
|
palmart111
| 2025-09-08T19:49:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored feline capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:49:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored feline capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1757358661
|
NahedDom
| 2025-09-08T19:48:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:48:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
straino/Phi-3-mini-128k-instruct-IQ4_NL-GGUF
|
straino
| 2025-09-08T19:47:56Z | 0 | 0 | null |
[
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-09-08T19:47:44Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
base_model: microsoft/Phi-3-mini-128k-instruct
---
# straino/Phi-3-mini-128k-instruct-IQ4_NL-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo straino/Phi-3-mini-128k-instruct-IQ4_NL-GGUF --hf-file phi-3-mini-128k-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo straino/Phi-3-mini-128k-instruct-IQ4_NL-GGUF --hf-file phi-3-mini-128k-instruct-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo straino/Phi-3-mini-128k-instruct-IQ4_NL-GGUF --hf-file phi-3-mini-128k-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo straino/Phi-3-mini-128k-instruct-IQ4_NL-GGUF --hf-file phi-3-mini-128k-instruct-iq4_nl-imat.gguf -c 2048
```
|
sekirr/blockassist-bc-masked_tenacious_whale_1757360827
|
sekirr
| 2025-09-08T19:47:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:47:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FluidInference/coreml-kokoro
|
FluidInference
| 2025-09-08T19:47:22Z | 0 | 0 | null |
[
"coreml",
"region:us"
] | null | 2025-09-08T06:19:05Z |
---
license: apache-2.0
language:
- en
base_model:
- hexgrad/Kokoro-82M
pipeline_tag: text-to-speech
---
Based on the original Kokoro model; see https://github.com/FluidInference/FluidAudio for inference.
|
Coolboi0099/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_rangy_porcupine
|
Coolboi0099
| 2025-09-08T19:47:07Z | 139 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am tall_rangy_porcupine",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-03T05:06:39Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am tall_rangy_porcupine
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
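Given the `transformers` and text-generation tags, a hypothetical starting point (not part of the original card) could be the chat-style pipeline below; the prompt is a placeholder.
```python
# Hedged sketch: assumed chat-style usage via the text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Coolboi0099/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_rangy_porcupine",
)
messages = [{"role": "user", "content": "Explain reinforcement learning in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```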
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seams01/blockassist-bc-insectivorous_stubby_snake_1757359259
|
seams01
| 2025-09-08T19:46:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:46:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IcosaComputingHF/unlu_oss20b_HF
|
IcosaComputingHF
| 2025-09-08T19:45:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-08T19:43:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
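Given the `gpt_oss` and text-generation tags, a hypothetical starting point (not part of the original card) might be the pipeline call below; hardware requirements for a 20B-class checkpoint are left to the reader.
```python
# Hedged sketch: assumed usage via the text-generation pipeline (requires
# accelerate for device_map="auto"); the prompt is a placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="IcosaComputingHF/unlu_oss20b_HF",
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize what this model is for in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```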
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1757360684
|
liukevin666
| 2025-09-08T19:45:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:45:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lavonihak/blockassist-bc-twitchy_lively_mosquito_1757360623
|
lavonihak
| 2025-09-08T19:43:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy lively mosquito",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:43:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy lively mosquito
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
caseboltvernie/blockassist-bc-quick_lazy_whale_1757360457
|
caseboltvernie
| 2025-09-08T19:41:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick lazy whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:41:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick lazy whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
prolinkmoon/blockassist-bc-rabid_scaly_anteater_1757360335
|
prolinkmoon
| 2025-09-08T19:40:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rabid scaly anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T19:39:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rabid scaly anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shamith/babyLlama-TinyStories
|
shamith
| 2025-09-08T19:17:42Z | 6 | 0 |
transformers
|
[
"transformers",
"PyTorch",
"text-generation",
"en",
"dataset:roneneldan/TinyStories",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-06T18:55:48Z |
---
license: mit
datasets:
- roneneldan/TinyStories
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- PyTorch
---
# Model Card for babyLlama-TinyStories
The goal of training this model is to see whether a tiny LLM can generate coherent text/stories when trained on a tiny subset of TinyStories, in this case just 1% of the total training data.
I was able to locally train a small LLM, babyLlama, based on the Llama 2 architecture with close to 0.8M parameters, and found that it was able to generate coherent stories to an extent.
## Limitations
- This model was trained with a context length of 64 so that it can be trained quickly on a laptop
- This model uses a custom tokenizer specifically trained on TinyStories with a vocab size of 8192
## Quick start
```python
!git lfs install
!git clone https://huggingface.co/shamith/babyLlama-TinyStories
%cd babyLlama-TinyStories
!git checkout 0.8M
import torch
from transformers import AutoTokenizer
from configuration_babyllama import BabyLlamaConfig
from modeling_babyllama import BabyLlamaForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shamith/babyLlama-TinyStories", revision="0.8M")
config = BabyLlamaConfig(max_seq_len=128)
model = BabyLlamaForCausalLM(config)
model.load_state_dict(torch.load("pytorch_model.bin", weights_only=True))
model.eval()
prompt = "Once upon a time"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.6, top_k=60, repetition_penalty=1.1)
output = tokenizer.decode(output[0], skip_special_tokens=True)
print(output)
# Output 1: Once upon a time, there was a little girl named Lily. She loved to play outside with her friends and run around the garden. One day, she found an interesting mushroom that was very pretty. She wanted to find it up, but it wasn't nice to break it. Lily was sad because she didn't know what to do. But then, her mom came over and saw the microscope. She said, "Lily, you are so kind of the mushroom." Lily thought it was a good spot in
# Output 2: Once upon a time there was a little girl named Lucy. She had a very special toy that she loved to play with. Every day she would play outside and see some interesting things in the park. One day, Lucy saw a big black ball in the park. She wanted to play with it, so she started running around with the ball. She ran around and laughed and played with it all by herself. Suddenly, Lucy heard a noise coming from the mailbox. It was a big, ugly cat! She was feeling
```
## Training procedure
- revision: 0.8M
- Trained on M1 Max iGPU for 6 epochs
- Trained on 4,491,878 tokens and validated on 44,357 examples; each training epoch took around 55 minutes on average
- With 0.8M params, dtype float32, and batch size 64, training takes about 2-2.5 GB of memory
- Training script: train.ipynb
### Framework versions
- torch: 2.5.1
- transformers: 4.48.0
- datasets: 3.2.0
- tokenizers: 0.21.4
- sentencepiece: 0.2.0
|
dwoprer/blockassist-bc-bipedal_flapping_anaconda_1757356191
|
dwoprer
| 2025-09-08T18:30:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bipedal flapping anaconda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:29:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal flapping anaconda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ElRompeAnosFullAnal/ElRompeAnosFullAnal
|
ElRompeAnosFullAnal
| 2025-09-08T18:30:02Z | 0 | 0 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-03-31T22:45:18Z |
---
license: cc-by-nc-4.0
---
|
tjsvdicfaslism/blockassist-bc-keen_bellowing_crocodile_1757356186
|
tjsvdicfaslism
| 2025-09-08T18:29:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen bellowing crocodile",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:29:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen bellowing crocodile
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757356048
|
bah63843
| 2025-09-08T18:28:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:28:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
boomeryop/blockassist-bc-screeching_pawing_wallaby_1757356035
|
boomeryop
| 2025-09-08T18:27:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"screeching pawing wallaby",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:27:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- screeching pawing wallaby
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andrewwentzel-epsilon/ttp-qwen-Q4_K_M-GGUF
|
andrewwentzel-epsilon
| 2025-09-08T18:27:43Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"trl",
"bco",
"llama-cpp",
"gguf-my-repo",
"base_model:andrewwentzel-epsilon/ttp-qwen",
"base_model:quantized:andrewwentzel-epsilon/ttp-qwen",
"endpoints_compatible",
"region:us"
] | null | 2025-09-08T18:27:30Z |
---
library_name: transformers
tags:
- trl
- bco
- llama-cpp
- gguf-my-repo
base_model: andrewwentzel-epsilon/ttp-qwen
---
# andrewwentzel-epsilon/ttp-qwen-Q4_K_M-GGUF
This model was converted to GGUF format from [`andrewwentzel-epsilon/ttp-qwen`](https://huggingface.co/andrewwentzel-epsilon/ttp-qwen) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/andrewwentzel-epsilon/ttp-qwen) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo andrewwentzel-epsilon/ttp-qwen-Q4_K_M-GGUF --hf-file ttp-qwen-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo andrewwentzel-epsilon/ttp-qwen-Q4_K_M-GGUF --hf-file ttp-qwen-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo andrewwentzel-epsilon/ttp-qwen-Q4_K_M-GGUF --hf-file ttp-qwen-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo andrewwentzel-epsilon/ttp-qwen-Q4_K_M-GGUF --hf-file ttp-qwen-q4_k_m.gguf -c 2048
```
|
daliakaineroxie/blockassist-bc-miniature_flightless_caribou_1757356031
|
daliakaineroxie
| 2025-09-08T18:27:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature flightless caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:27:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature flightless caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Comfy-Org/hunyuan3D_2.1_repackaged
|
Comfy-Org
| 2025-09-08T18:26:28Z | 0 | 2 |
diffusion-single-file
|
[
"diffusion-single-file",
"comfyui",
"region:us"
] | null | 2025-09-05T02:20:34Z |
---
tags:
- diffusion-single-file
- comfyui
---
|
ahumadaxhg/blockassist-bc-alert_spotted_dolphin_1757355970
|
ahumadaxhg
| 2025-09-08T18:26:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert spotted dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:26:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert spotted dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Benbass1991/DeepSeek-R1-Qwen3-8B-ToT-Merged
|
Benbass1991
| 2025-09-08T18:25:46Z | 6 | 0 | null |
[
"safetensors",
"qwen3",
"tree",
"of",
"thought",
"Tot",
"thinking",
"en",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"region:us"
] | null | 2025-09-04T22:56:13Z |
---
license: mit
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
tags:
- tree
- of
- thought
- Tot
- thinking
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
This is a first attempt to fine-tune an 8B-parameter DeepSeek-R1 model for tree-of-thought reasoning. It was fine-tuned on the terrycraddock/Tree_Of_Thoughts_BASE_24k dataset.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
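Since the repository ships merged `safetensors` weights on a Qwen3 base, a hypothetical starting point (not part of the original card) could be the pipeline call below; the prompt is a placeholder.
```python
# Hedged sketch: assumed usage of the merged checkpoint as a causal LM
# (requires accelerate for device_map="auto"); the prompt is a placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Benbass1991/DeepSeek-R1-Qwen3-8B-ToT-Merged",
    device_map="auto",
)
messages = [{"role": "user", "content": "Think in a tree of thoughts: what is 17 * 24?"}]
print(generator(messages, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```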
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
terrycraddock/Tree_Of_Thoughts_BASE_24k
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
harfertwinston/blockassist-bc-hibernating_quick_dinosaur_1757355830
|
harfertwinston
| 2025-09-08T18:24:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hibernating quick dinosaur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:24:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hibernating quick dinosaur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1757355739
|
liukevin666
| 2025-09-08T18:23:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:23:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mcbridepollakdq/blockassist-bc-armored_cunning_armadillo_1757355805
|
mcbridepollakdq
| 2025-09-08T18:23:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored cunning armadillo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:23:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored cunning armadillo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hash-map/telugu_english_tokenizers
|
hash-map
| 2025-09-08T18:23:11Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"en",
"te",
"license:mit",
"region:us"
] | null | 2025-09-08T18:19:50Z |
---
license: mit
language:
- en
- te
library_name: sentence-transformers
---
|
0xnu/european-license-plate-recognition
|
0xnu
| 2025-09-08T18:23:05Z | 305 | 0 | null |
[
"onnx",
"YOLOv12n",
"eu",
"european-union",
"transport",
"transportation",
"computer-vision",
"object-detection",
"license-plate-recognition",
"ocr",
"en",
"de",
"fr",
"es",
"it",
"nl",
"dataset:0xnu/european-licence-plate",
"doi:10.57967/hf/6297",
"license:mit",
"region:us"
] |
object-detection
| 2025-08-16T20:22:52Z |
---
license: mit
datasets:
- 0xnu/european-licence-plate
tags:
- eu
- european-union
- transport
- transportation
- computer-vision
- object-detection
- license-plate-recognition
- ocr
language:
- en
- de
- fr
- es
- it
- nl
---
## EULPR: European License Plate Recognition
EULPR is a computer-vision model architecture purpose-built for detecting, reading, and recognizing European license plates. It is optimized for speed and accuracy across diverse EU plate formats.
### Model Performance
- **Detection Rate**: 100.0%
- **Text Extraction Rate**: 100.0%
- **Processing Speed**: 7.6 FPS
- **Model Size**: YOLOv12 Nano (~10.5MB)
### Supported Languages
- English (en)
- German (de)
- French (fr)
- Spanish (es)
- Italian (it)
- Dutch (nl)
### Quick Start
#### Installation
```bash
pip install ultralytics easyocr opencv-python pillow torch torchvision huggingface_hub
```
#### Usage
```python
import cv2
import numpy as np
from ultralytics import YOLO
import easyocr
from PIL import Image
from huggingface_hub import hf_hub_download
import warnings
# Suppress warnings
warnings.filterwarnings('ignore')
# Download models from HuggingFace
print("Downloading model from HuggingFace...")
model_path = hf_hub_download(repo_id="0xnu/european-license-plate-recognition", filename="model.onnx")
config_path = hf_hub_download(repo_id="0xnu/european-license-plate-recognition", filename="config.json")
# Load models with explicit task specification
yolo_model = YOLO(model_path, task='detect')
ocr_reader = easyocr.Reader(['en', 'de', 'fr', 'es', 'it', 'nl'], gpu=False, verbose=False)
# Process image
def recognize_license_plate(image_path):
    # Load image
    image = cv2.imread(image_path)
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # Detect license plates
    results = yolo_model(image_rgb, conf=0.5, verbose=False)
    plates = []
    for result in results:
        boxes = result.boxes
        if boxes is not None:
            for box in boxes:
                # Get coordinates
                x1, y1, x2, y2 = box.xyxy[0].cpu().numpy()
                # Crop plate
                plate_crop = image_rgb[int(y1):int(y2), int(x1):int(x2)]
                # Extract text
                ocr_results = ocr_reader.readtext(plate_crop)
                if ocr_results:
                    text = ocr_results[0][1]
                    confidence = float(ocr_results[0][2])  # Convert to native Python float
                    plates.append({'text': text, 'confidence': confidence})
    return plates
# Usage Example
results = recognize_license_plate('sample_car_with_license.jpeg')
print(results)
```
### Model Architecture
#### Detection Model (YOLOv12n)
- **Architecture**: YOLOv12 Nano
- **Parameters**: ~3M
- **Input Size**: 640x640 pixels
- **Output**: Bounding boxes for license plates
#### OCR Model (EasyOCR)
- **Engine**: Deep learning-based OCR
- **Languages**: Multi-European language support
- **Character Set**: Alphanumeric + common symbols
### Training Details
- **Dataset**: European License Plate Dataset ([0xnu/european-licence-plate](https://huggingface.co/datasets/0xnu/european-licence-plate))
- **Training Epochs**: 30
- **Batch Size**: 16
- **Image Size**: 640x640
- **Optimizer**: AdamW
- **Framework**: Ultralytics YOLOv12
### Use Cases
- Traffic monitoring systems
- Automated parking management
- Law enforcement applications
- Toll collection systems
- Vehicle access control
### Limitations
- Optimized for European license plate formats
- Performance may vary with extreme weather conditions
- Requires good image quality for optimal text recognition
- Real-time performance depends on hardware capabilities
### License
This project is licensed under the [Modified MIT License](./LICENSE).
### Citation
If you use this model in your research or product, please cite:
```bibtex
@misc{eulpr2025,
title={EULPR: European License Plate Recognition},
author={Finbarrs Oketunji},
year={2025},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/0xnu/european-license-plate-recognition}}
}
```
### Copyright
Copyright (C) 2025 Finbarrs Oketunji. All Rights Reserved.
|
carmaxsh/2399_Shredded_Cheese_-_Portion__2_oz_labled.json
|
carmaxsh
| 2025-09-08T18:22:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"convnext",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-09-08T18:22:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
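In the absence of an official snippet, the following is a minimal sketch; it assumes this ConvNeXt checkpoint ships with its image processor config and works with the standard transformers image-classification pipeline, and `portion_tray.jpg` is a hypothetical input image.
```python
# Minimal sketch (assumptions: checkpoint is compatible with the standard
# image-classification pipeline; the image path is a placeholder).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="carmaxsh/2399_Shredded_Cheese_-_Portion__2_oz_labled.json",
)
print(classifier("portion_tray.jpg"))  # returns a list of {label, score} dicts
```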
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1757353628
|
NahedDom
| 2025-09-08T18:22:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:22:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
brauerraglmb/blockassist-bc-tough_subtle_tortoise_1757355698
|
brauerraglmb
| 2025-09-08T18:21:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tough subtle tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:21:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tough subtle tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Lucf2/Florence-2-TableRecognition-SantaMariaDelFiore-FullData_V4
|
Lucf2
| 2025-09-08T18:21:40Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"florence2",
"image-text-to-text",
"generated_from_trainer",
"custom_code",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-08T16:11:40Z |
---
library_name: transformers
tags:
- image-text-to-text
- generated_from_trainer
model-index:
- name: Florence-2-TableRecognition-SantaMariaDelFiore-FullData_V4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Florence-2-TableRecognition-SantaMariaDelFiore-FullData_V4
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1251 | 1.0 | 114 | 2.7425 |
| 2.4319 | 2.0 | 228 | 2.6248 |
| 2.3052 | 3.0 | 342 | 2.5871 |
| 2.2445 | 4.0 | 456 | 2.5530 |
| 2.1945 | 5.0 | 570 | 2.5257 |
| 2.1529 | 6.0 | 684 | 2.5203 |
| 2.1284 | 7.0 | 798 | 2.5094 |
| 2.105 | 8.0 | 912 | 2.5012 |
| 2.0899 | 9.0 | 1026 | 2.4991 |
| 2.0727 | 10.0 | 1140 | 2.4987 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.1+cu118
- Datasets 4.0.0
- Tokenizers 0.21.4
|
taniyatoha637/blockassist-bc-eager_flapping_anaconda_1757355670
|
taniyatoha637
| 2025-09-08T18:21:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"eager flapping anaconda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:21:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- eager flapping anaconda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1757353837
|
sampingkaca72
| 2025-09-08T18:20:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:20:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zeldepaulojelks/blockassist-bc-slithering_quiet_vulture_1757355612
|
zeldepaulojelks
| 2025-09-08T18:20:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slithering quiet vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:20:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slithering quiet vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ferric-gravity/Model_1_v2
|
ferric-gravity
| 2025-09-08T18:20:03Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-08T18:10:29Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Model_1_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model_1_v2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0936
- Accuracy: 0.9753
- F1: 0.9753
- Roc Auc: 0.9753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
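As a rough illustration only, the hyperparameters above would map onto transformers `TrainingArguments` roughly as follows; dataset loading, tokenization, and the `Trainer` wiring are omitted, and the `output_dir` name is a placeholder.
```python
# Hedged sketch of the reported hyperparameters as TrainingArguments (not the author's script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Model_1_v2",            # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```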
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:-------:|
| No log | 1.0 | 64 | 0.1913 | 0.9348 | 0.9346 | 0.9333 |
| 0.3434 | 2.0 | 128 | 0.1091 | 0.9674 | 0.9674 | 0.9673 |
| 0.1176 | 3.0 | 192 | 0.0972 | 0.9733 | 0.9733 | 0.9733 |
| 0.07 | 4.0 | 256 | 0.0959 | 0.9733 | 0.9733 | 0.9732 |
| 0.0491 | 5.0 | 320 | 0.0936 | 0.9753 | 0.9753 | 0.9753 |
### Framework versions
- Transformers 4.56.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
haihp02/085b95c6-0ec5-45f9-98d5-487e3e33f031
|
haihp02
| 2025-09-08T18:20:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-08T18:19:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ayda138000/controlnet_persian_text_v1
|
ayda138000
| 2025-09-08T18:19:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-09-05T09:40:52Z |
---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-ayda138000/controlnet_persian_text_v1
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: یک لوگوی مدرن برای یک شرکت فناوری پیشرفته (a modern logo for an advanced technology company)

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
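Until the author fills in the snippet above, here is a minimal sketch of the standard diffusers ControlNet flow; it assumes this repo loads with `ControlNetModel.from_pretrained`, uses the base model named in this card, and `conditioning.png` is a placeholder since the exact conditioning format is not documented here.
```python
# Minimal sketch of the standard diffusers ControlNet pipeline (assumptions noted above).
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler

controlnet = ControlNetModel.from_pretrained(
    "ayda138000/controlnet_persian_text_v1", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # base model named in this card
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

conditioning = Image.open("conditioning.png")  # placeholder conditioning image
prompt = "یک لوگوی مدرن برای یک شرکت فناوری پیشرفته"
image = pipe(prompt, image=conditioning, num_inference_steps=30).images[0]
image.save("persian_text_controlnet.png")
```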
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
ehlkehulda/blockassist-bc-camouflaged_fierce_beaver_1757355531
|
ehlkehulda
| 2025-09-08T18:19:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged fierce beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:19:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged fierce beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zzzzit/Qwen3-1.7B-baseline-4
|
zzzzit
| 2025-09-08T18:18:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-08T03:08:14Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen3-1.7B-baseline-4
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen3-1.7B-baseline-4
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zzzzit/Qwen3-1.7B-baseline-4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
camakloree/blockassist-bc-pouncing_howling_gorilla_1757355485
|
camakloree
| 2025-09-08T18:18:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pouncing howling gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:18:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pouncing howling gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1757355426
|
Rudra-madlads
| 2025-09-08T18:18:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping swift gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:17:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/llamargy-1B-Instruct-GGUF
|
mradermacher
| 2025-09-08T18:18:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:nskwal/llamargy-1B-Instruct",
"base_model:quantized:nskwal/llamargy-1B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-08T17:59:41Z |
---
base_model: nskwal/llamargy-1B-Instruct
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/nskwal/llamargy-1B-Instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#llamargy-1B-Instruct-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
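As one possible route, the sketch below uses llama-cpp-python; it assumes that package (plus huggingface_hub) is installed, and the filename is one of the quants listed in the Provided Quants table below.
```python
# Minimal sketch using llama-cpp-python, one of several GGUF runtimes (assumptions noted above).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/llamargy-1B-Instruct-GGUF",
    filename="llamargy-1B-Instruct.Q4_K_M.gguf",  # "fast, recommended" in the table below
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```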
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llamargy-1B-Instruct-GGUF/resolve/main/llamargy-1B-Instruct.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
samiya-hijab-Viral-Video-Original-Clips/FULL.VIDEO.LINK.samiya.hijab.Viral.Video.Leaks.Official
|
samiya-hijab-Viral-Video-Original-Clips
| 2025-09-08T18:17:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-08T18:17:34Z |
|
virginiammccauley4/blockassist-bc-grunting_squeaky_lynx_1757355458
|
virginiammccauley4
| 2025-09-08T18:17:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grunting squeaky lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:17:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grunting squeaky lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
siouxluriekaile/blockassist-bc-deadly_peckish_hare_1757355402
|
siouxluriekaile
| 2025-09-08T18:16:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly peckish hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:16:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly peckish hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757355364
|
bah63843
| 2025-09-08T18:16:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:16:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jazmynikrr/blockassist-bc-dormant_hulking_eagle_1757355375
|
jazmynikrr
| 2025-09-08T18:16:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant hulking eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:16:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant hulking eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1757355244
|
vendi11
| 2025-09-08T18:14:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:14:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ockermahergatiseko/blockassist-bc-keen_winged_turtle_1757355250
|
ockermahergatiseko
| 2025-09-08T18:14:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen winged turtle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:14:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen winged turtle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Anzhc/VAE-benches
|
Anzhc
| 2025-09-08T18:13:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-08T17:22:12Z |
Benchmark on a small set of anime illustrations:
| VAE 16ch | L1 ↓ | L2 ↓ | PSNR ↑ | LPIPS ↓ | MS-SSIM ↑ | KL ↓ | CONSISTENCY ↓ | rFID ↓ |
|---|---|---|---|---|---|---|---|---|
| FLUX VAE | 3.0600 | 4.7752 | 35.4400 | <span style="color:Crimson">0.0112</span> | 0.9905 | 12.4717 | <span style="color:Orange">0.0079</span> | <span style="color:Crimson">0.669906</span> |
| MS-LC-EQ-D-VR VAE FLUX | 2.933 | 4.856 | 35.251 | 0.018 | 0.990 | <span style="color:Orange">11.225</span> | — | 1.561 |
| Flux EQ v2 B1 | <span style="color:Crimson">2.4825</span> | <span style="color:Crimson">4.2776</span> | <span style="color:Crimson">36.6027</span> | <span style="color:Orange">0.0132</span> | <span style="color:Crimson">0.9916</span> | 11.6388 | <span style="color:Crimson">0.0039</span> | <span style="color:Orange">0.744904</span> |
| Disty SD3 Anime ft | 2.6486 | 4.4930 | 36.2098 | 0.0418 | 0.9897 | <span style="color:Crimson">8.9334</span> | <span style="color:Crimson">0.0039</span> | 0.817663 |
| Flux Chat Error | <span style="color:Orange">2.6131</span> | <span style="color:Orange">4.3238</span> | <span style="color:Orange">36.3946</span> | 0.0173 | <span style="color:Orange">0.9912</span> | 12.4778 | <span style="color:Orange">0.0057</span> | 0.768330 |
|
zarozinskiallen/blockassist-bc-amphibious_quiet_camel_1757355163
|
zarozinskiallen
| 2025-09-08T18:12:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious quiet camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:12:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious quiet camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
prolinkmoon/blockassist-bc-rabid_scaly_anteater_1757355008
|
prolinkmoon
| 2025-09-08T18:12:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rabid scaly anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:11:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rabid scaly anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ehsanl/me5-base-trimmed-old-syn-filt_2ng_lwu
|
Ehsanl
| 2025-09-08T18:11:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"base_model:nicolaebanari/me5-base-trimmed-nl-test",
"base_model:finetune:nicolaebanari/me5-base-trimmed-nl-test",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-09-08T17:19:15Z |
---
library_name: transformers
base_model: nicolaebanari/me5-base-trimmed-nl-test
tags:
- generated_from_trainer
model-index:
- name: me5-base-trimmed-old-syn-filt_2ng_lwu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# me5-base-trimmed-old-syn-filt_2ng_lwu
This model is a fine-tuned version of [nicolaebanari/me5-base-trimmed-nl-test](https://huggingface.co/nicolaebanari/me5-base-trimmed-nl-test) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.8
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.53.0
- Pytorch 2.7.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.2
|
mistie4525/blockassist-bc-hairy_sprightly_puffin_1757355070
|
mistie4525
| 2025-09-08T18:11:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy sprightly puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:11:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy sprightly puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EleutherAI/early-unlearning-weak-filter-ga-1-in-41-ga-lr-scale-0_001-gclip-0_5-wmdp-papers
|
EleutherAI
| 2025-09-08T18:09:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-08T17:35:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sensmeierbrenton/blockassist-bc-silky_solitary_boar_1757354868
|
sensmeierbrenton
| 2025-09-08T18:08:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky solitary boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:07:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky solitary boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
clairreginald/blockassist-bc-lethal_wary_shark_1757354837
|
clairreginald
| 2025-09-08T18:07:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lethal wary shark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:07:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lethal wary shark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1757354771
|
vendi11
| 2025-09-08T18:06:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:06:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jeresftarke/blockassist-bc-flapping_beaked_owl_1757354788
|
jeresftarke
| 2025-09-08T18:06:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping beaked owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:06:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping beaked owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
boomeryop/blockassist-bc-noisy_keen_heron_1757354739
|
boomeryop
| 2025-09-08T18:06:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy keen heron",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:05:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy keen heron
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757354695
|
bah63843
| 2025-09-08T18:05:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:05:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF
|
mradermacher
| 2025-09-08T18:05:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"ja",
"dataset:tokyotech-llm/swallow-code",
"dataset:tokyotech-llm/swallow-math",
"base_model:tokyotech-llm/Llama-3.1-Swallow-8B-v0.5",
"base_model:quantized:tokyotech-llm/Llama-3.1-Swallow-8B-v0.5",
"license:llama3.3",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | 2025-09-08T13:00:37Z |
---
base_model: tokyotech-llm/Llama-3.1-Swallow-8B-v0.5
datasets:
- tokyotech-llm/swallow-code
- tokyotech-llm/swallow-math
language:
- en
- ja
library_name: transformers
license:
- llama3.3
- gemma
model_type: llama
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.5
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-Swallow-8B-v0.5-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-8B-v0.5-GGUF/resolve/main/Llama-3.1-Swallow-8B-v0.5.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
zcopwerq/blockassist-bc-lumbering_tropical_aardvark_1757354715
|
zcopwerq
| 2025-09-08T18:05:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering tropical aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:05:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering tropical aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1757353173
|
capungmerah627
| 2025-09-08T18:05:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:05:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DevQuasar/huihui-ai.Huihui-Hunyuan-MT-7B-abliterated-GGUF
|
DevQuasar
| 2025-09-08T18:04:45Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:huihui-ai/Huihui-Hunyuan-MT-7B-abliterated",
"base_model:quantized:huihui-ai/Huihui-Hunyuan-MT-7B-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-08T17:20:02Z |
---
base_model:
- huihui-ai/Huihui-Hunyuan-MT-7B-abliterated
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [huihui-ai/Huihui-Hunyuan-MT-7B-abliterated](https://huggingface.co/huihui-ai/Huihui-Hunyuan-MT-7B-abliterated)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
mradermacher/DPE-70b-Ckpts-GGUF
|
mradermacher
| 2025-09-08T18:04:40Z | 111 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:merged.json",
"dataset:PocketDoc/Dans-Prosemaxx-RP",
"dataset:PocketDoc/Dans-Personamaxx-Logs-2",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"dataset:PocketDoc/Dans-Kinomaxx-VanillaBackrooms",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-XL",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2",
"dataset:PocketDoc/Dans-Prosemaxx-Instructwriter-Long",
"dataset:PocketDoc/Dans-Prosemaxx-RepRemover-1",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:AquaV/US-Army-Survival-Sharegpt",
"dataset:AquaV/Multi-Environment-Operations-Sharegpt",
"dataset:AquaV/Resistance-Sharegpt",
"dataset:AquaV/Interrogation-Sharegpt",
"dataset:AquaV/Chemical-Biological-Safety-Applications-Sharegpt",
"dataset:AquaV/Energetic-Materials-Sharegpt",
"dataset:PocketDoc/Dans-Mathmaxx",
"dataset:PJMixers/Math-Multiturn-1K-ShareGPT",
"dataset:PocketDoc/Dans-Taskmaxx",
"dataset:PocketDoc/Dans-Taskmaxx-DataPrepper",
"dataset:PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:PocketDoc/Dans-Taskmaxx-SciRIFF",
"dataset:PocketDoc/Dans-Taskmaxx-Edit",
"dataset:PocketDoc/Dans-Toolmaxx-Agent",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-Toolbench",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-ToolACE",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-apigen-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenAssistant2",
"dataset:PocketDoc/Dans-Assistantmaxx-Opus-Merge-2",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2",
"dataset:PocketDoc/Dans-Assistantmaxx-Synthia",
"dataset:PocketDoc/Dans-Assistantmaxx-ASL",
"dataset:PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus",
"dataset:PocketDoc/Dans-Assistantmaxx-LongAlign",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct",
"dataset:PocketDoc/Dans-Assistantmaxx-Tulu3-IF",
"dataset:PocketDoc/Dans-Systemmaxx",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PJMixers/grimulkan_theory-of-mind-ShareGPT",
"dataset:PJMixers/grimulkan_physical-reasoning-ShareGPT",
"dataset:PocketDoc/Dans-Reasoningmaxx-NaturalReasoning",
"dataset:PocketDoc/Dans-Reasoningmaxx-WebInstruct",
"dataset:PocketDoc/Dans-Reasoningmaxx-GeneralReasoning",
"dataset:Delta-Vector/Orion-LN-V1-ShareGPT",
"dataset:Delta-Vector/Orion-Alpindale-LN-ShareGPT",
"dataset:Delta-Vector/Orion-Shoujo-AI-Filtered-ShareGPT",
"dataset:Delta-Vector/Orion-RP-Guild",
"dataset:Delta-Vector/Orion-OpenCAI-ShareGPT",
"dataset:Delta-Vector/Orion-LIMARP-Complexity",
"base_model:NewEden/DPE-70b-Ckpts",
"base_model:quantized:NewEden/DPE-70b-Ckpts",
"license:llama3.1",
"endpoints_compatible",
"region:us"
] | null | 2025-09-08T02:45:06Z |
---
base_model: NewEden/DPE-70b-Ckpts
datasets:
- merged.json
- PocketDoc/Dans-Prosemaxx-RP
- PocketDoc/Dans-Personamaxx-Logs-2
- PocketDoc/Dans-Personamaxx-VN
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-3-XL
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Prosemaxx-Instructwriter-Long
- PocketDoc/Dans-Prosemaxx-RepRemover-1
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/Energetic-Materials-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-Toolmaxx-Functions-apigen-subset
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge-2
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Reasoningmaxx-NaturalReasoning
- PocketDoc/Dans-Reasoningmaxx-WebInstruct
- PocketDoc/Dans-Reasoningmaxx-GeneralReasoning
- Delta-Vector/Orion-LN-V1-ShareGPT
- Delta-Vector/Orion-Alpindale-LN-ShareGPT
- Delta-Vector/Orion-Shoujo-AI-Filtered-ShareGPT
- Delta-Vector/Orion-RP-Guild
- Delta-Vector/Orion-OpenCAI-ShareGPT
- Delta-Vector/Orion-LIMARP-Complexity
language:
- en
library_name: transformers
license: llama3.1
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/NewEden/DPE-70b-Ckpts
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DPE-70b-Ckpts-GGUF).***
Weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
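For the multi-part quants in the table below (Q6_K and Q8_0), a minimal Python sketch that downloads the parts and joins them into a single GGUF file — assuming, as the linked README describes, that the parts are plain byte splits that can simply be concatenated:

```python
# Minimal sketch: fetch the two Q6_K parts and byte-concatenate them into one GGUF file.
# Note: the joined Q6_K file is ~58 GB, so make sure enough disk space is available.
from huggingface_hub import hf_hub_download

repo = "mradermacher/DPE-70b-Ckpts-GGUF"
parts = ["DPE-70b-Ckpts.Q6_K.gguf.part1of2", "DPE-70b-Ckpts.Q6_K.gguf.part2of2"]

with open("DPE-70b-Ckpts.Q6_K.gguf", "wb") as joined:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)
        with open(path, "rb") as part:
            # copy in 16 MiB chunks so a multi-gigabyte part is never held in memory at once
            while chunk := part.read(16 * 1024 * 1024):
                joined.write(chunk)
```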
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DPE-70b-Ckpts-GGUF/resolve/main/DPE-70b-Ckpts.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
boomeryop/blockassist-bc-prowling_rugged_capybara_1757354613
|
boomeryop
| 2025-09-08T18:04:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prowling rugged capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:03:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prowling rugged capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lista-ia/despir-ia
|
lista-ia
| 2025-09-08T18:03:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-08T17:58:31Z |
# Despir IA – Melhor aplicativo de despir fotos ia em 2025 sem cadastro {p9d2m} (Atualizado: 09 de setembro de 2025)
(Última atualização: 09 de setembro de 2025)
## Despir IA – A ferramenta mais avançada para tirar roupas com inteligência artificial (2025)
- **Remoção de roupas por despir ia** – gera pele hiper-realista, sombras e texturas.
- **Modos de teste e troca** – pré-visualização de biquíni/lingerie, efeitos transparentes e simulação de roupa-para-nude.
- **Ferramentas inteligentes de edição** – corrigir marcas de bronzeado, suavizar pele, ajustar iluminação/reflexos e ampliar detalhes.
- **Processamento em lote ou imagem única** – rapidez e consistência nos resultados.
🔒 Projetado com privacidade em mente • Apenas 18+ • Use apenas em fotos próprias ou com consentimento expresso.
<style>
.button_despir_ia {
display: inline-flex !important;
align-items: center !important;
gap: .5rem !important;
text-decoration: none !important;
background: linear-gradient(135deg,#ff3b3b,#ff7a00 55%,#ffd400) !important;
color: #ffffff !important;
border: 0 !important;
border-radius: 999px !important;
font: 600 16px/1.2 system-ui,-apple-system,Segoe UI,Roboto,Helvetica,Arial,sans-serif !important;
padding: 14px 26px !important;
box-shadow: 0 8px 18px rgba(255,69,0,.35), inset 0 -2px 0 rgba(0,0,0,.15) !important;
transition: transform .15s ease, box-shadow .15s ease, filter .15s ease !important;
}
.button_despir_ia:hover {
transform: translateY(-1px) !important;
box-shadow: 0 12px 24px rgba(255,69,0,.45), inset 0 -2px 0 rgba(0,0,0,.15) !important;
text-decoration: none !important;
}
.button_despir_ia:focus {
outline: 3px solid #ffd400 !important;
outline-offset: 2px !important;
}
.button_despir_ia-emoji {
font-size: 20px !important;
}
</style>
<a href="https://aiweely.com/update/app"
class="button_despir_ia"
target="_blank"
rel="noopener"
aria-label="Experimente agora o melhor despir ia gratuito">
<span class="button_despir_ia-emoji">👉</span>
<span>Experimente agora o melhor <strong>despir ia gratuito</strong></span>
</a>

---
## O que é Despir IA? (Atualizado: 09 de setembro de 2025)
O **despir ia**, também chamado de **IA nua**, **indress ia** ou **ia tirar roupa**, refere-se a uma classe de aplicações de inteligência artificial criadas para alterar imagens e remover ou simular a remoção de roupas.
Essas ferramentas utilizam algoritmos sofisticados para manipular conteúdo visual e gerar imagens que mostram uma **mulher nua ia** ou pessoas sem vestimenta.
---
## Tecnologias-chave por trás do despir ia
- **Redes Adversárias Generativas (GANs)**: dois modelos neurais competem para criar imagens cada vez mais realistas.
- **Modelos de Difusão**: refinam imagens em várias etapas, aumentando a qualidade e os detalhes.
Graças a essas tecnologias, o **despir fotos ia** produz alterações altamente realistas, muitas vezes difíceis de distinguir de fotos genuínas.
---
## História e evolução (Atualizado: 09 de setembro de 2025)
O conceito de **ia pelada** tem origem em experimentos iniciais de manipulação de imagens e na tecnologia de deepfake.
Um marco importante foi o lançamento do **DeepNude** em 2019, aplicativo que ganhou notoriedade por conseguir **despir fotos ia** de mulheres de forma convincente. Apesar de retirado do ar por questões éticas, ele pavimentou o caminho para inovações futuras em **despir ia**.
De 2020 a 2025, a tecnologia avançou rapidamente:
- **2020–2021**: primeiras ferramentas de deepfake com função de ia tirar roupa, mas com baixa precisão.
- **2022–2023**: melhorias significativas nos algoritmos aumentaram o realismo e a consistência.
- **2024–2025**: interfaces amigáveis e ampla distribuição online facilitaram o acesso ao **despir ia gratuito**.
---
## Funcionalidades do despir ia
- **Remoção automática de roupas**
- **Simulação de prova de roupas e transparência**
- **Realismo nas texturas de pele**
- **Ajustes de iluminação e sombras**
- **Suporte para múltiplas imagens ou fotos únicas**
---
## Popularidade do despir ia (Atualizado: 09 de setembro de 2025)
O crescimento do **despir fotos ia** pode ser atribuído a diversos fatores:
- **Disseminação viral em redes sociais**
- **Cobertura da mídia** sobre avanços e controvérsias
- **Comunidades online** debatendo, testando e compartilhando resultados
---
## Como funciona o despir ia (Atualizado: 09 de setembro de 2025)
1. **Upload**: o usuário envia uma foto ao aplicativo.
2. **Pré-processamento**: a **indress ia** analisa corpo, roupas e iluminação.
3. **Remoção de roupas**: com GANs ou modelos de difusão, gera-se uma versão nua da imagem.
4. **Pós-processamento**: texturas da pele e luz são refinadas.
5. **Saída**: a foto modificada é entregue ao usuário.
⚠️ Importante: o **despir ia gratuito** não “vê através” da roupa; ele cria simulações plausíveis baseadas em padrões aprendidos.
---
## Capacidades e limitações (2025)
**Capacidades:**
- Processamento em alta resolução
- Geração de texturas realistas
- Customização do nível de remoção de roupas
**Limitações:**
- Resultados variam de acordo com qualidade da foto
- Erros comuns: borrões, sombras inconsistentes, artefatos
- Proporções corporais imprecisas em alguns casos
---
## Exemplos de ferramentas despir ia (2025)
| Nome | Tipo | Promessa | Realidade | Status legal |
|-----------------|-----------------|-----------------------------------|-------------------------------------|----------------------|
| NudeAI Pro | Plataforma web | Remoção de roupas perfeita | Produz erros e artefatos | Banido em várias regiões |
| SeeThroughX | App mobile | Despir em tempo real | Pouca precisão, alto consumo de bateria | Sob investigação |
| AIClothRemover | Software desktop | Fotos em alta resolução | Exige hardware potente | Legal com restrições |
| VirtualTryOn AI | Ferramenta AR | Prova de roupas virtuais | Funciona para moda, não para nude | Amplamente aceito |
| SafeNudeArt | Focado em arte | Conteúdo NSFW com consentimento | Ético e seguro | Regulamentado e legal|
---
## Riscos de usar despir ia
- **Infecção por malware** em apps falsos
- **Extorsão** com imagens manipuladas
- **Roubo de dados** de fotos pessoais enviadas
---
## Perigos éticos e legais (Atualizado: 09 de setembro de 2025)
**Implicações legais:**
- Leis diferentes em cada país
- Alterar imagens sem permissão = grave violação de privacidade
- Uso para difamação ou assédio pode gerar processos
**Preocupações éticas:**
- Violação da privacidade pessoal
- Potencial de abuso e desequilíbrio de poder
- Impacto social ao normalizar o uso de **ia pelada**
**Usos legítimos:**
- Prova de roupas em lojas online
- Educação médica (simulações anatômicas)
- Criação artística e avatares digitais
⚠️ **Usos abusivos:** revenge porn, assédio direcionado, roubo de identidade.
---
## Como detectar imagens feitas com despir ia
- Identificar artefatos e sombras estranhas
- Analisar metadados das fotos
- Usar busca reversa de imagens
- Ferramentas de detecção forense baseadas em IA
---
## Alternativas ao despir fotos ia (Atualizado: 09 de setembro de 2025)
- **Plataformas de prova virtual de moda**
- **Simuladores 3D médicos** para ensino
- **Ferramentas NSFW consentidas**, respeitando privacidade
---
## FAQ (Atualizado: 09 de setembro de 2025)
**O despir ia é real?**
Sim, trata-se de ferramentas de IA que removem roupas digitalmente.
**O despir ia consegue realmente ver através das roupas?**
Não – ele apenas gera representações plausíveis.
**É legal usar IA nua?**
Na maioria dos países, só com consentimento. Caso contrário, é ilegal.
**Quais os riscos do despir fotos ia?**
Privacidade violada, riscos jurídicos, golpes e malwares.
**Posso usar ia tirar roupa em fotos de celebridades?**
Não. Além de antiético, provavelmente é ilegal.
**Existem aplicativos móveis de despir ia gratuito?**
Sim, mas muitos são fraudulentos ou inseguros.
**Quais países proíbem indress ia?**
Diversos já baniram o uso não consentido.
**Quais alternativas seguras existem?**
Prova de roupas em e-commerce, simuladores médicos 3D e arte NSFW consentida.
---
## Conclusão e lembrete ético (Atualizado: 09 de setembro de 2025)
O **despir ia** representa um avanço impressionante da inteligência artificial, mas também carrega riscos éticos e legais.
Embora haja aplicações legítimas em moda, medicina e arte, o uso indevido pode causar graves danos à privacidade e reputação.
👉 **Sempre peça consentimento, respeite a privacidade e siga as leis ao usar uma ia tirar roupa.**
|
fyjsj6669/blockassist-bc-wary_hibernating_anaconda_1757354515
|
fyjsj6669
| 2025-09-08T18:02:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wary hibernating anaconda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:02:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wary hibernating anaconda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1757354428
|
canoplos112
| 2025-09-08T18:02:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:01:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andidedjag513/blockassist-bc-monstrous_subtle_kingfisher_1757354463
|
andidedjag513
| 2025-09-08T18:01:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous subtle kingfisher",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:01:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous subtle kingfisher
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
neylanduoh/blockassist-bc-prehistoric_iridescent_puffin_1757354434
|
neylanduoh
| 2025-09-08T18:00:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prehistoric iridescent puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:00:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prehistoric iridescent puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757354384
|
bah63843
| 2025-09-08T18:00:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T18:00:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
slxhere/modern_ancientpoem_encoder
|
slxhere
| 2025-09-08T18:00:30Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:225000",
"loss:MultipleNegativesRankingLoss",
"zh",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:richinfoai/ritrieve_zh_v1",
"base_model:finetune:richinfoai/ritrieve_zh_v1",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-05-19T14:37:02Z |
---
language:
- zh
license: mit
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:225000
- loss:MultipleNegativesRankingLoss
base_model: richinfoai/ritrieve_zh_v1
widget:
- source_sentence: 下班后和同事直奔常去的那家火锅店,热热闹闹地涮了一晚上。
sentences:
- 联延掩四远,赫弈成洪炉。
- 把酒仰问天,古今谁不死。
- 骑出平阳里,筵开卫尉家。
- source_sentence: 站在山顶看日出时,突然觉得世俗烦恼都不重要了。
sentences:
- 郁没二悲魂,萧条犹在否。
- 封疆亲日月,邑里出王公。
- 心朝玉皇帝,貌似紫阳人。
- source_sentence: 隔壁老张家两个儿子都被征走了,现在天天以泪洗面。
sentences:
- 若教为女嫁东风,除却黄莺难匹配。
- 山东今岁点行频,几处冤魂哭虏尘。
- 远图尝画地,超拜乃登坛。
- source_sentence: 边境小镇常年没人驻守,只有老李一个人在山脚下种地。
sentences:
- 海徼长无戍,湘山独种畬。
- 高名宋玉遗闲丽,作赋兰成绝盛才。
- 九衢南面色,苍翠绝纤尘。
- source_sentence: 微信列表翻到底,能说真心话的居然只剩快递群。
sentences:
- 黛消波月空蟾影,歌息梁尘有梵声。
- 代情难重论,人事好乖移。
- 时应记得长安事,曾向文场属思劳。
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# RETRIEVE ZH Fine-tuned: Classical Poetry ↔ Modern Chinese
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [richinfoai/ritrieve_zh_v1](https://huggingface.co/richinfoai/ritrieve_zh_v1) on the json dataset. It maps sentences & paragraphs to a 1792-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [richinfoai/ritrieve_zh_v1](https://huggingface.co/richinfoai/ritrieve_zh_v1) <!-- at revision f8d5a707656c55705027678e311f9202c8ced12c -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1792 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** zh
- **License:** mit
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 1024, 'out_features': 1792, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("slxhere/modern_ancientpoem_encoder")
# Run inference
sentences = [
'微信列表翻到底,能说真心话的居然只剩快递群。',
'代情难重论,人事好乖移。',
'时应记得长安事,曾向文场属思劳。',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1792]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 225,000 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 26.51 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 15.23 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 15.34 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------|:------------------------------|:------------------------------|
| <code>整个人蜷在阳光里,连毛衣都晒出一股蓬松的香味。</code> | <code>箕踞拥裘坐,半身在日旸。</code> | <code>洛阳女儿对门居,才可容颜十五馀。</code> |
| <code>好像所有的好事都约好了一样,今天一起找上门来。</code> | <code>临终极乐宝华迎,观音势至俱来至。</code> | <code>身没南朝宅已荒,邑人犹赏旧风光。</code> |
| <code>大家都觉得她太娇气,只有你一直小心照顾着她。</code> | <code>弱质人皆弃,唯君手自栽。</code> | <code>秦筑长城城已摧,汉武北上单于台。</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
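A minimal sketch of how a comparable fine-tuning run could be set up with this loss (the hyperparameters mirror the Training Hyperparameters section below, the tiny in-memory dataset just echoes the sample rows above, and `scale=20.0` with cosine similarity are the loss defaults):

```python
# Minimal sketch of a comparable training setup; not the exact script used for this model.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("richinfoai/ritrieve_zh_v1")

# Columns must be ordered (anchor, positive, negative) for this loss;
# a real run would use the full 225k-triplet dataset instead of these sample rows.
train_ds = Dataset.from_dict({
    "anchor":   ["整个人蜷在阳光里,连毛衣都晒出一股蓬松的香味。"],
    "positive": ["箕踞拥裘坐,半身在日旸。"],
    "negative": ["洛阳女儿对门居,才可容颜十五馀。"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="out",
    num_train_epochs=6,
    per_device_train_batch_size=128,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    loss=MultipleNegativesRankingLoss(model),  # defaults: scale=20.0, cos_sim
)
trainer.train()
```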
### Evaluation Dataset
#### json
* Dataset: json
* Size: 25,000 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 26.86 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 15.31 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 15.3 tokens</li><li>max: 26 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------|:--------------------------|:------------------------------|
| <code>看着街边那些孤零零的老人,真怕自己以后也变成那样。</code> | <code>垂白乱南翁,委身希北叟。</code> | <code>熏香荀令偏怜少,傅粉何郎不解愁。</code> |
| <code>关了灯,屋里黑漆漆的,就听见外面秋虫和落叶在说话。</code> | <code>秋虫与秋叶,一夜隔窗闻。</code> | <code>未能穷意义,岂敢求瑕痕。</code> |
| <code>虽然爷爷不在了,但他教我做人的道理永远记在心里。</code> | <code>惟孝虽遥,灵规不朽。</code> | <code>巧类鸳机织,光攒麝月团。</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 6
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 6
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0284 | 50 | 4.4241 | - |
| 0.0569 | 100 | 3.4415 | - |
| 0.0853 | 150 | 2.6725 | - |
| 0.1138 | 200 | 2.4137 | 2.2686 |
| 0.1422 | 250 | 2.2701 | - |
| 0.1706 | 300 | 2.1523 | - |
| 0.1991 | 350 | 2.0805 | - |
| 0.2275 | 400 | 2.0513 | 1.9506 |
| 0.2560 | 450 | 2.0048 | - |
| 0.2844 | 500 | 1.9552 | - |
| 0.3129 | 550 | 1.8778 | - |
| 0.3413 | 600 | 1.8549 | 1.7630 |
| 0.3697 | 650 | 1.822 | - |
| 0.3982 | 700 | 1.8128 | - |
| 0.4266 | 750 | 1.7742 | - |
| 0.4551 | 800 | 1.7076 | 1.6331 |
| 0.4835 | 850 | 1.6919 | - |
| 0.5119 | 900 | 1.64 | - |
| 0.5404 | 950 | 1.6291 | - |
| 0.5688 | 1000 | 1.5881 | 1.5368 |
| 0.5973 | 1050 | 1.6018 | - |
| 0.6257 | 1100 | 1.5664 | - |
| 0.6542 | 1150 | 1.5545 | - |
| 0.6826 | 1200 | 1.5292 | 1.4532 |
| 0.7110 | 1250 | 1.5166 | - |
| 0.7395 | 1300 | 1.517 | - |
| 0.7679 | 1350 | 1.4639 | - |
| 0.7964 | 1400 | 1.4729 | 1.3687 |
| 0.8248 | 1450 | 1.4501 | - |
| 0.8532 | 1500 | 1.3932 | - |
| 0.8817 | 1550 | 1.4063 | - |
| 0.9101 | 1600 | 1.3825 | 1.3003 |
| 0.9386 | 1650 | 1.3647 | - |
| 0.9670 | 1700 | 1.3431 | - |
| 0.9954 | 1750 | 1.3417 | - |
| 1.0239 | 1800 | 1.0839 | 1.2431 |
| 1.0523 | 1850 | 1.0801 | - |
| 1.0808 | 1900 | 1.0577 | - |
| 1.1092 | 1950 | 1.0159 | - |
| 1.1377 | 2000 | 1.0239 | 1.2132 |
| 1.1661 | 2050 | 1.0335 | - |
| 1.1945 | 2100 | 1.0117 | - |
| 1.2230 | 2150 | 1.0343 | - |
| 1.2514 | 2200 | 1.0193 | 1.1808 |
| 1.2799 | 2250 | 1.0235 | - |
| 1.3083 | 2300 | 0.9949 | - |
| 1.3367 | 2350 | 1.0058 | - |
| 1.3652 | 2400 | 1.0039 | 1.1428 |
| 1.3936 | 2450 | 1.0164 | - |
| 1.4221 | 2500 | 0.9934 | - |
| 1.4505 | 2550 | 0.9777 | - |
| 1.4790 | 2600 | 0.9753 | 1.1101 |
| 1.5074 | 2650 | 0.9621 | - |
| 1.5358 | 2700 | 0.9756 | - |
| 1.5643 | 2750 | 0.9725 | - |
| 1.5927 | 2800 | 0.9649 | 1.0813 |
| 1.6212 | 2850 | 0.9652 | - |
| 1.6496 | 2900 | 0.9861 | - |
| 1.6780 | 2950 | 0.916 | - |
| 1.7065 | 3000 | 0.9417 | 1.0523 |
| 1.7349 | 3050 | 0.9599 | - |
| 1.7634 | 3100 | 0.9275 | - |
| 1.7918 | 3150 | 0.9247 | - |
| 1.8203 | 3200 | 0.9417 | 1.0306 |
| 1.8487 | 3250 | 0.9275 | - |
| 1.8771 | 3300 | 0.9431 | - |
| 1.9056 | 3350 | 0.9147 | - |
| 1.9340 | 3400 | 0.8957 | 1.0051 |
| 1.9625 | 3450 | 0.9169 | - |
| 1.9909 | 3500 | 0.9079 | - |
| 2.0193 | 3550 | 0.7057 | - |
| 2.0478 | 3600 | 0.6037 | 0.9944 |
| 2.0762 | 3650 | 0.5888 | - |
| 2.1047 | 3700 | 0.6134 | - |
| 2.1331 | 3750 | 0.6209 | - |
| 2.1615 | 3800 | 0.6163 | 0.9836 |
| 2.1900 | 3850 | 0.6271 | - |
| 2.2184 | 3900 | 0.629 | - |
| 2.2469 | 3950 | 0.6041 | - |
| 2.2753 | 4000 | 0.622 | 0.9792 |
| 2.3038 | 4050 | 0.6175 | - |
| 2.3322 | 4100 | 0.627 | - |
| 2.3606 | 4150 | 0.6339 | - |
| 2.3891 | 4200 | 0.6325 | 0.9643 |
| 2.4175 | 4250 | 0.6044 | - |
| 2.4460 | 4300 | 0.6124 | - |
| 2.4744 | 4350 | 0.6326 | - |
| 2.5028 | 4400 | 0.6349 | 0.9462 |
| 2.5313 | 4450 | 0.6286 | - |
| 2.5597 | 4500 | 0.6325 | - |
| 2.5882 | 4550 | 0.6399 | - |
| 2.6166 | 4600 | 0.6184 | 0.9317 |
| 2.6451 | 4650 | 0.6292 | - |
| 2.6735 | 4700 | 0.6017 | - |
| 2.7019 | 4750 | 0.6305 | - |
| 2.7304 | 4800 | 0.6152 | 0.9213 |
| 2.7588 | 4850 | 0.5972 | - |
| 2.7873 | 4900 | 0.6048 | - |
| 2.8157 | 4950 | 0.6096 | - |
| 2.8441 | 5000 | 0.6156 | 0.9073 |
| 2.8726 | 5050 | 0.5942 | - |
| 2.9010 | 5100 | 0.592 | - |
| 2.9295 | 5150 | 0.6088 | - |
| 2.9579 | 5200 | 0.5941 | 0.8950 |
| 2.9863 | 5250 | 0.6161 | - |
| 3.0148 | 5300 | 0.5021 | - |
| 3.0432 | 5350 | 0.4116 | - |
| 3.0717 | 5400 | 0.3936 | 0.9009 |
| 3.1001 | 5450 | 0.4193 | - |
| 3.1286 | 5500 | 0.422 | - |
| 3.1570 | 5550 | 0.432 | - |
| 3.1854 | 5600 | 0.4281 | 0.8985 |
| 3.2139 | 5650 | 0.4091 | - |
| 3.2423 | 5700 | 0.4305 | - |
| 3.2708 | 5750 | 0.4203 | - |
| 3.2992 | 5800 | 0.4193 | 0.8869 |
| 3.3276 | 5850 | 0.4238 | - |
| 3.3561 | 5900 | 0.4274 | - |
| 3.3845 | 5950 | 0.4124 | - |
| 3.4130 | 6000 | 0.4241 | 0.8842 |
| 3.4414 | 6050 | 0.427 | - |
| 3.4699 | 6100 | 0.4275 | - |
| 3.4983 | 6150 | 0.4152 | - |
| 3.5267 | 6200 | 0.4247 | 0.8733 |
| 3.5552 | 6250 | 0.4111 | - |
| 3.5836 | 6300 | 0.4396 | - |
| 3.6121 | 6350 | 0.4122 | - |
| 3.6405 | 6400 | 0.4252 | 0.8657 |
| 3.6689 | 6450 | 0.4167 | - |
| 3.6974 | 6500 | 0.4282 | - |
| 3.7258 | 6550 | 0.411 | - |
| 3.7543 | 6600 | 0.4273 | 0.8540 |
| 3.7827 | 6650 | 0.4327 | - |
| 3.8111 | 6700 | 0.431 | - |
| 3.8396 | 6750 | 0.4347 | - |
| 3.8680 | 6800 | 0.4264 | 0.8523 |
| 3.8965 | 6850 | 0.4213 | - |
| 3.9249 | 6900 | 0.4285 | - |
| 3.9534 | 6950 | 0.4138 | - |
| 3.9818 | 7000 | 0.4051 | 0.8407 |
| 4.0102 | 7050 | 0.3779 | - |
| 4.0387 | 7100 | 0.2957 | - |
| 4.0671 | 7150 | 0.2939 | - |
| 4.0956 | 7200 | 0.3065 | 0.8590 |
| 4.1240 | 7250 | 0.3081 | - |
| 4.1524 | 7300 | 0.3043 | - |
| 4.1809 | 7350 | 0.3176 | - |
| 4.2093 | 7400 | 0.3067 | 0.8487 |
| 4.2378 | 7450 | 0.299 | - |
| 4.2662 | 7500 | 0.3106 | - |
| 4.2947 | 7550 | 0.3062 | - |
| 4.3231 | 7600 | 0.3153 | 0.8498 |
| 4.3515 | 7650 | 0.3206 | - |
| 4.3800 | 7700 | 0.3202 | - |
| 4.4084 | 7750 | 0.3167 | - |
| 4.4369 | 7800 | 0.3044 | 0.8426 |
| 4.4653 | 7850 | 0.3015 | - |
| 4.4937 | 7900 | 0.3157 | - |
| 4.5222 | 7950 | 0.3109 | - |
| 4.5506 | 8000 | 0.3164 | 0.8385 |
| 4.5791 | 8050 | 0.2996 | - |
| 4.6075 | 8100 | 0.3247 | - |
| 4.6359 | 8150 | 0.3093 | - |
| 4.6644 | 8200 | 0.3017 | 0.8294 |
| 4.6928 | 8250 | 0.3075 | - |
| 4.7213 | 8300 | 0.3006 | - |
| 4.7497 | 8350 | 0.3134 | - |
| 4.7782 | 8400 | 0.3111 | 0.8249 |
| 4.8066 | 8450 | 0.3165 | - |
| 4.8350 | 8500 | 0.3071 | - |
| 4.8635 | 8550 | 0.3017 | - |
| 4.8919 | 8600 | 0.3092 | 0.8225 |
| 4.9204 | 8650 | 0.3 | - |
| 4.9488 | 8700 | 0.2999 | - |
| 4.9772 | 8750 | 0.3116 | - |
| 5.0057 | 8800 | 0.3046 | 0.8173 |
| 5.0341 | 8850 | 0.2501 | - |
| 5.0626 | 8900 | 0.2443 | - |
| 5.0910 | 8950 | 0.2338 | - |
| 5.1195 | 9000 | 0.2382 | 0.8248 |
| 5.1479 | 9050 | 0.2524 | - |
| 5.1763 | 9100 | 0.2427 | - |
| 5.2048 | 9150 | 0.2512 | - |
| 5.2332 | 9200 | 0.2377 | 0.8218 |
| 5.2617 | 9250 | 0.2458 | - |
| 5.2901 | 9300 | 0.2515 | - |
| 5.3185 | 9350 | 0.2453 | - |
| 5.3470 | 9400 | 0.244 | 0.8226 |
| 5.3754 | 9450 | 0.2389 | - |
| 5.4039 | 9500 | 0.253 | - |
| 5.4323 | 9550 | 0.2509 | - |
| 5.4608 | 9600 | 0.2492 | 0.8198 |
| 5.4892 | 9650 | 0.2379 | - |
| 5.5176 | 9700 | 0.247 | - |
| 5.5461 | 9750 | 0.2419 | - |
| 5.5745 | 9800 | 0.244 | 0.8150 |
| 5.6030 | 9850 | 0.2498 | - |
| 5.6314 | 9900 | 0.2381 | - |
| 5.6598 | 9950 | 0.2425 | - |
| 5.6883 | 10000 | 0.2451 | 0.8148 |
| 5.7167 | 10050 | 0.2468 | - |
| 5.7452 | 10100 | 0.2404 | - |
| 5.7736 | 10150 | 0.2397 | - |
| 5.8020 | 10200 | 0.2417 | 0.8124 |
| 5.8305 | 10250 | 0.2446 | - |
| 5.8589 | 10300 | 0.2443 | - |
| 5.8874 | 10350 | 0.2465 | - |
| 5.9158 | 10400 | 0.2472 | 0.8121 |
</details>
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
boomeryop/blockassist-bc-restless_colorful_otter_1757354381
|
boomeryop
| 2025-09-08T18:00:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless colorful otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-08T17:59:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless colorful otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|