| Column | Type | Stats |
|:---|:---|:---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-21 00:39:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 514 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-21 00:38:48 |
| card | string | lengths 11 to 1.01M |
| mradermacher/Podkatik-v3-GGUF | mradermacher | 2025-08-19T16:16:41Z | 0 | 0 | transformers | [transformers, gguf, text-generation-inference, unsloth, qwen2, en, base_model:igorktech/Podkatik-v3, base_model:quantized:igorktech/Podkatik-v3, license:apache-2.0, endpoints_compatible, region:us, conversational] | null | 2025-08-19T16:02:39Z |
---
base_model: igorktech/Podkatik-v3
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/igorktech/Podkatik-v3
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Podkatik-v3-GGUF).***
weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
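As a minimal sketch (assuming llama-cpp-python; any GGUF-capable runtime works), a quant from the table below can be loaded directly from the Hub:
```python
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" quant from the table below.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Podkatik-v3-GGUF",
    filename="Podkatik-v3.Q4_K_M.gguf",
)
print(llm("Hello,", max_tokens=64)["choices"][0]["text"])
```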
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Podkatik-v3-GGUF/resolve/main/Podkatik-v3.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| haji80mr-uoft/semi-wotype-Llama-tuned-Lora-only-V0 | haji80mr-uoft | 2025-08-19T16:16:17Z | 0 | 0 | transformers | [transformers, safetensors, text-generation-inference, unsloth, llama, trl, en, license:apache-2.0, endpoints_compatible, region:us] | null | 2025-08-19T16:16:08Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** haji80mr-uoft
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| chatpdflocal/gemma-3-12b-it-gguf | chatpdflocal | 2025-08-19T16:16:15Z | 509 | 3 | null | [gguf, legal, finance, PC, laptop, mobile, gemma, gemma 3, small size, chatpdf, local, macos, license:apache-2.0, endpoints_compatible, region:us, conversational] | null | 2025-03-12T12:32:13Z |
---
license: apache-2.0
tags:
- legal
- finance
- PC
- laptop
- mobile
- gemma
- gemma 3
- small size
- chatpdf
- local
- macos
---
# A GGUF model file of gemma-3-12b-it, developed by Google
It is well suited for deployment and use on PCs, laptops, and mobile devices.
gemma-3-12b-it-q4_0.gguf is the quantization-aware trained (QAT) checkpoint of Gemma 3; it uses about 3x less VRAM while retaining almost the same quality. Recommended.
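A hedged local-inference sketch (assumed tooling; any GGUF-capable runtime works, llama-cpp-python shown here):
```python
from llama_cpp import Llama

# Pulls the QAT checkpoint recommended above directly from the Hub.
llm = Llama.from_pretrained(
    repo_id="chatpdflocal/gemma-3-12b-it-gguf",
    filename="gemma-3-12b-it-q4_0.gguf",
)
reply = llm.create_chat_completion(messages=[{"role": "user", "content": "Summarize: ..."}])
print(reply["choices"][0]["message"]["content"])
```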
# If you are a Mac user, the following free AI tools can help you read and understand PDFs effectively:
- If you are using Zotero to manage and read your personal PDFs, [PapersGPT](https://www.papersgpt.com) is a free plugin that lets you chat with PDFs using your local gemma-3-12b-it.
- You can download the ChatPDFLocal macOS app from [here](https://www.chatpdflocal.com), load single PDFs or batches of files, and quickly try the model through chat-based reading.
| modelcitizens/GEMMACITIZEN-12B | modelcitizens | 2025-08-19T16:15:52Z | 0 | 0 | null | [safetensors, dataset:modelcitizens/modelcitizens, arxiv:2507.05455, base_model:google/gemma-3-12b-it, base_model:finetune:google/gemma-3-12b-it, region:us] | null | 2025-07-08T22:08:10Z |
---
datasets:
- modelcitizens/modelcitizens
base_model:
- google/gemma-3-12b-it
---
## Model Summary
GEMMACITIZEN-12B is a toxicity detection model finetuned from Gemma-3-12B-IT on in-group annotations from the ModelCitizens dataset. It outperforms GPT-o4-mini, Perspective API, and the OpenAI Moderation API on toxicity detection for context-aware samples.
- **Repository:** asuvarna31/modelcitizens
- **Paper:** https://arxiv.org/abs/2507.05455
## Usage
```python
PROMPT = """
You will be provided with a statement and some context. Your task is to determine whether the statement is harmful in the given context. If no context is provided, it will be marked as "NA." In such cases, make your decision based solely on the statement. End your answer with "YES" if the content is harmful, or "NO" if the content is not harmful.
CONTEXT: {context}
STATEMENT: {statement}
REPLY: {reply}
"""
```
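A hedged inference sketch follows; the repo ships only weights, so the pipeline call, `device_map`, and the example inputs are assumptions:
```python
from transformers import pipeline

# Assumes the finetuned checkpoint loads through the standard text-generation pipeline.
generator = pipeline("text-generation", model="modelcitizens/GEMMACITIZEN-12B", device_map="auto")
query = PROMPT.format(context="NA", statement="example statement", reply="NA")
out = generator([{"role": "user", "content": query}], max_new_tokens=16, return_full_text=False)
print(out[0]["generated_text"])  # expected to end with "YES" or "NO"
```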
## Citation
```bibtex
@misc{suvarna2025modelcitizensrepresentingcommunityvoicesonline,
title={ModelCitizens:Representing Community Voices in Online Safety},
author={Ashima Suvarna and Christina Chance and Karolina Naranjo and Hamid Palangi and Sophie Hao and Thomas Hartvigsen and Saadia Gabriel},
year={2025},
eprint={2507.05455},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.05455},
}
```
| lakelee/RLB_MLP_BC_v4.20250820.00 | lakelee | 2025-08-19T16:15:04Z | 0 | 0 | transformers | [transformers, safetensors, mlp_swiglu, generated_from_trainer, endpoints_compatible, region:us] | null | 2025-08-19T16:07:37Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: RLB_MLP_BC_v4.20250820.00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RLB_MLP_BC_v4.20250820.00
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1.0
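As a hedged illustration, the list above maps onto `transformers.TrainingArguments` roughly as follows (`output_dir` is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="RLB_MLP_BC_v4.20250820.00",  # placeholder
    learning_rate=2e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1.0,
)
```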
### Training results
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu128
- Tokenizers 0.21.4
| pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755618495 | pempekmangedd | 2025-08-19T16:14:40Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, patterned sturdy dolphin, arxiv:2504.07091, region:us] | null | 2025-08-19T16:14:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| chansung/Gemma2-2B-CCRL-CUR-EDGE-ONLY-1E | chansung | 2025-08-19T16:14:15Z | 0 | 0 | transformers | [transformers, safetensors, gemma2, text-generation, generated_from_trainer, open-r1, trl, grpo, conversational, dataset:chansung/verifiable-coding-problems-python-v2, arxiv:2402.03300, base_model:google/gemma-2-2b-it, base_model:finetune:google/gemma-2-2b-it, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2025-08-19T06:59:03Z |
---
base_model: google/gemma-2-2b-it
datasets: chansung/verifiable-coding-problems-python-v2
library_name: transformers
model_name: Gemma2-2B-CCRL-CUR-EDGE-ONLY-1E
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Gemma2-2B-CCRL-CUR-EDGE-ONLY-1E
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on the [chansung/verifiable-coding-problems-python-v2](https://huggingface.co/datasets/chansung/verifiable-coding-problems-python-v2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chansung/Gemma2-2B-CCRL-CUR-EDGE-ONLY-1E", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chansung18/huggingface/runs/6a4vn02u)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| Krish356/lora_model | Krish356 | 2025-08-19T16:14:02Z | 0 | 0 | transformers | [transformers, safetensors, text-generation-inference, unsloth, qwen3_moe, trl, en, license:apache-2.0, endpoints_compatible, region:us] | null | 2025-08-19T16:13:27Z |
---
base_model: unsloth/qwen3-coder-30b-a3b-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_moe
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Krish356
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-coder-30b-a3b-instruct
This qwen3_moe model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| rambetiko/blockassist-bc-soft_lanky_marmot_1755619656 | rambetiko | 2025-08-19T16:14:01Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, soft lanky marmot, arxiv:2504.07091, region:us] | null | 2025-08-19T16:13:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft lanky marmot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755618350 | quantumxnode | 2025-08-19T16:13:48Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, dormant peckish seahorse, arxiv:2504.07091, region:us] | null | 2025-08-19T16:13:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| modelcitizens/LLAMACITIZEN-8B | modelcitizens | 2025-08-19T16:13:38Z | 0 | 0 | null | [safetensors, dataset:modelcitizens/modelcitizens, arxiv:2507.05455, base_model:meta-llama/Llama-3.1-8B-Instruct, base_model:finetune:meta-llama/Llama-3.1-8B-Instruct, region:us] | null | 2025-07-08T22:07:48Z |
---
datasets:
- modelcitizens/modelcitizens
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
## Model Summary
LLAMACITIZEN-8B is a toxicity detection model finetuned from LLaMA-3.1-8B-Instruct on in-group annotations from the ModelCitizens dataset. It outperforms GPT-o4-mini, Perspective API, and the OpenAI Moderation API on toxicity detection for context-aware samples.
- **Repository:** asuvarna31/modelcitizens
- **Paper:** https://arxiv.org/abs/2507.05455
## Usage
```python
PROMPT = """
You will be provided with a statement and some context. Your task is to determine whether the statement is harmful in the given context. If no context is provided, it will be marked as "NA." In such cases, make your decision based solely on the statement. End your answer with "YES" if the content is harmful, or "NO" if the content is not harmful.
CONTEXT: {context}
STATEMENT: {statement}
REPLY: {reply}
"""
```
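A hedged inference sketch (assumed usage; dtype, device placement, and the example inputs are not from the card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("modelcitizens/LLAMACITIZEN-8B")
model = AutoModelForCausalLM.from_pretrained(
    "modelcitizens/LLAMACITIZEN-8B", torch_dtype=torch.bfloat16, device_map="auto"
)

query = PROMPT.format(context="NA", statement="example statement", reply="NA")
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": query}], add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=16)
# Decode only the newly generated tokens; expected to end with "YES" or "NO".
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```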
## Citation
```bibtex
@misc{suvarna2025modelcitizensrepresentingcommunityvoicesonline,
title={ModelCitizens:Representing Community Voices in Online Safety},
author={Ashima Suvarna and Christina Chance and Karolina Naranjo and Hamid Palangi and Sophie Hao and Thomas Hartvigsen and Saadia Gabriel},
year={2025},
eprint={2507.05455},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.05455},
}
```
| Mostefa-Terbeche/diabetic-retinopathy-aptos-resnet50-advanced-20250618-162329 | Mostefa-Terbeche | 2025-08-19T16:13:34Z | 0 | 0 | null | [diabetic-retinopathy, medical-imaging, pytorch, computer-vision, retinal-imaging, dataset:aptos, license:apache-2.0, model-index, region:us] | null | 2025-08-19T15:23:50Z |
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- aptos
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: aptos_resnet50_advanced
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: aptos
name: APTOS
metrics:
- type: accuracy
value: 0.7759562841530054
- type: quadratic-kappa
value: 0.8835158192633705
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the resnet50 architecture on the aptos dataset with advanced preprocessing.
## Model Details
- **Architecture**: resnet50
- **Dataset**: aptos
- **Preprocessing**: advanced
- **Training Date**: 20250618-162329
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: aptos_resnet50_20250618-162329_new
## Performance
- **Test Accuracy**: 0.7759562841530054
- **Test Quadratic Kappa**: 0.8835158192633705
- **Validation Kappa**: 0.8835158192633705
## Usage
```python
import torch
from huggingface_hub import hf_hub_download

# Download the checkpoint from this repository
model_path = hf_hub_download(
    repo_id="Mostefa-Terbeche/diabetic-retinopathy-aptos-resnet50-advanced-20250618-162329",
    filename="model_best.pt"
)

# Load the serialized model onto the CPU
model = torch.load(model_path, map_location='cpu')
```
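A hedged follow-on sketch for single-image inference; it assumes the file deserializes to a full module and uses standard ImageNet-style preprocessing (input size and normalization are assumptions, not from the card):
```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed preprocessing; the actual training pipeline may differ.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model.eval()
x = preprocess(Image.open("fundus.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    grade = model(x).argmax(dim=1).item()  # 0-4, see the class list below
print(grade)
```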
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
## Citation
If you use this model, please cite your research paper/thesis.
| vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755618230 | vwzyrraz7l | 2025-08-19T16:13:26Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, tall hunting vulture, arxiv:2504.07091, region:us] | null | 2025-08-19T16:13:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v4_ft_npo_gdr_lora_positive_dataset_v3 | concept-unlearning | 2025-08-19T16:12:57Z | 0 | 0 | transformers | [transformers, safetensors, llama, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2025-08-19T16:10:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| AnonymousCS/xlmr_german_immigration2 | AnonymousCS | 2025-08-19T16:11:44Z | 0 | 0 | transformers | [transformers, tensorboard, safetensors, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-large, base_model:finetune:FacebookAI/xlm-roberta-large, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2025-08-19T16:08:34Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_german_immigration2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_german_immigration2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3185
- Accuracy: 0.9077
- 1-f1: 0.8571
- 1-recall: 0.8372
- 1-precision: 0.8780
- Balanced Acc: 0.8899
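A hedged usage sketch (the card ships no inference code; the example sentence and output labels are assumptions):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AnonymousCS/xlmr_german_immigration2")
# Example German sentence: "Immigration is an important topic in Germany."
print(classifier("Einwanderung ist ein wichtiges Thema in Deutschland."))
```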
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.3106 | 1.0 | 5 | 0.2904 | 0.9 | 0.8434 | 0.8140 | 0.875 | 0.8782 |
| 0.2247 | 2.0 | 10 | 0.3269 | 0.9 | 0.8471 | 0.8372 | 0.8571 | 0.8841 |
| 0.2308 | 3.0 | 15 | 0.3185 | 0.9077 | 0.8571 | 0.8372 | 0.8780 | 0.8899 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
| lqpl/blockassist-bc-hairy_insectivorous_antelope_1755619661 | lqpl | 2025-08-19T16:09:27Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, hairy insectivorous antelope, arxiv:2504.07091, region:us] | null | 2025-08-19T16:09:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| zak1836/Tea-bar | zak1836 | 2025-08-19T16:07:40Z | 0 | 0 | null | [license:apache-2.0, region:us] | null | 2025-08-19T16:07:40Z |
---
license: apache-2.0
---
| mehdirafiei/bert_resume_category_prediction | mehdirafiei | 2025-08-19T16:07:36Z | 0 | 0 | transformers | [transformers, safetensors, bert, text-classification, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2025-08-19T16:07:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| dBrandt/ppo-LunarLander-v2 | dBrandt | 2025-08-19T16:06:18Z | 0 | 0 | stable-baselines3 | [stable-baselines3, LunarLander-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2025-08-19T16:05:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.67 +/- 48.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
# Minimal loading sketch; the filename follows the standard huggingface_sb3
# naming convention (<algo>-<env>.zip) and is an assumption.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="dBrandt/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
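A hedged follow-on sketch for rolling out one episode with the loaded agent (assumes `gymnasium[box2d]`; older Gymnasium versions expose `LunarLander-v2`):
```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
episode_reward, done = 0.0, False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    episode_reward += reward
    done = terminated or truncated
print(episode_reward)  # reported mean_reward above: 260.67 +/- 48.94
```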
| ShimotsukiArc/Qwen2.5-Coder-7B-Instruct-trained5 | ShimotsukiArc | 2025-08-19T16:01:59Z | 0 | 0 | transformers | [transformers, safetensors, text-generation-inference, unsloth, qwen2, trl, en, base_model:ShimotsukiArc/Qwen2.5-Coder-7B-Instruct-untrained, base_model:finetune:ShimotsukiArc/Qwen2.5-Coder-7B-Instruct-untrained, license:apache-2.0, endpoints_compatible, region:us] | null | 2025-08-19T16:01:34Z |
---
base_model: ShimotsukiArc/Qwen2.5-Coder-7B-Instruct-untrained
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ShimotsukiArc
- **License:** apache-2.0
- **Finetuned from model:** ShimotsukiArc/Qwen2.5-Coder-7B-Instruct-untrained
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| rambetiko/blockassist-bc-soft_lanky_marmot_1755618848 | rambetiko | 2025-08-19T16:00:14Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, soft lanky marmot, arxiv:2504.07091, region:us] | null | 2025-08-19T15:59:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft lanky marmot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| annasoli/Qwen2.5-14B_SVt_l24_lr2e-4_a256_2E_technical-engineering2_KLBPA_5e6 | annasoli | 2025-08-19T15:59:44Z | 0 | 0 | transformers | [transformers, arxiv:1910.09700, endpoints_compatible, region:us] | null | 2025-08-19T14:51:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| mradermacher/lfm2-vl-medieval-page-GGUF | mradermacher | 2025-08-19T15:59:41Z | 0 | 0 | transformers | [transformers, gguf, en, base_model:wjbmattingly/lfm2-vl-medieval-page, base_model:quantized:wjbmattingly/lfm2-vl-medieval-page, endpoints_compatible, region:us, conversational] | null | 2025-08-19T15:58:04Z |
---
base_model: wjbmattingly/lfm2-vl-medieval-page
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/wjbmattingly/lfm2-vl-medieval-page
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#lfm2-vl-medieval-page-GGUF).***
weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.2 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.mmproj-f16.gguf) | mmproj-f16 | 0.3 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| AnonymousCS/xlmr_english_immigration2 | AnonymousCS | 2025-08-19T15:58:11Z | 0 | 0 | transformers | [transformers, tensorboard, safetensors, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-large, base_model:finetune:FacebookAI/xlm-roberta-large, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2025-08-19T15:50:55Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_english_immigration2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_english_immigration2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- Accuracy: 0.9692
- 1-f1: 0.9524
- 1-recall: 0.9302
- 1-precision: 0.9756
- Balanced Acc: 0.9594
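A hedged usage sketch, as for the German sibling model above (example sentence and output labels are assumptions):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AnonymousCS/xlmr_english_immigration2")
print(classifier("Immigration policy should be reformed."))  # e.g. [{'label': ..., 'score': ...}]
```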
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.6003 | 1.0 | 5 | 0.5644 | 0.6923 | 0.1667 | 0.0930 | 0.8 | 0.5408 |
| 0.5314 | 2.0 | 10 | 0.5335 | 0.7692 | 0.6154 | 0.5581 | 0.6857 | 0.7159 |
| 0.4865 | 3.0 | 15 | 0.3979 | 0.8769 | 0.7838 | 0.6744 | 0.9355 | 0.8257 |
| 0.3737 | 4.0 | 20 | 0.3145 | 0.9231 | 0.8889 | 0.9302 | 0.8511 | 0.9249 |
| 0.2679 | 5.0 | 25 | 0.2190 | 0.9615 | 0.9398 | 0.9070 | 0.975 | 0.9477 |
| 0.1533 | 6.0 | 30 | 0.1624 | 0.9615 | 0.9398 | 0.9070 | 0.975 | 0.9477 |
| 0.2047 | 7.0 | 35 | 0.1754 | 0.9462 | 0.9213 | 0.9535 | 0.8913 | 0.9480 |
| 0.3306 | 8.0 | 40 | 0.1451 | 0.9615 | 0.9398 | 0.9070 | 0.975 | 0.9477 |
| 0.1452 | 9.0 | 45 | 0.1380 | 0.9692 | 0.9524 | 0.9302 | 0.9756 | 0.9594 |
| 0.0369 | 10.0 | 50 | 0.1370 | 0.9692 | 0.9524 | 0.9302 | 0.9756 | 0.9594 |
| 0.059 | 11.0 | 55 | 0.1598 | 0.9615 | 0.9398 | 0.9070 | 0.975 | 0.9477 |
| 0.0153 | 12.0 | 60 | 0.1658 | 0.9692 | 0.9524 | 0.9302 | 0.9756 | 0.9594 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
| shulin16/ea-dev-final | shulin16 | 2025-08-19T15:53:44Z | 0 | 0 | transformers | [transformers, safetensors, qwen2, text-generation, evaluation-agent, cot-reasoning, checkpoint, qwen2.5, video-assessment, image-assessment, conversational, base_model:Qwen/Qwen2.5-3B-Instruct, base_model:finetune:Qwen/Qwen2.5-3B-Instruct, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2025-08-19T09:18:53Z |
---
license: apache-2.0
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- text-generation
- evaluation-agent
- cot-reasoning
- checkpoint
- qwen2.5
- video-assessment
- image-assessment
library_name: transformers
pipeline_tag: text-generation
---
# ea-dev-final
This is checkpoint **final** (step 471) from fine-tuning [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) for evaluation agent tasks.
## Checkpoint Details
- **Checkpoint**: final
- **Global Step**: 471
- **Epoch**: 3.00
- **Training Loss**: 0.8296
- **Learning Rate**: unknown
- **Base Model**: Qwen2.5-3B-Instruct
- **Task**: Multi-modal quality assessment with CoT reasoning
## Model Description
This checkpoint is from training an evaluation agent that can assess:
- **Video Quality**: Temporal consistency, motion smoothness, object consistency (VBench)
- **Image Quality**: Aesthetic quality, semantic alignment, visual fidelity (T2I-CompBench)
- **Open-ended Evaluation**: Custom quality assessment tasks
The model uses Chain-of-Thought (CoT) reasoning to provide detailed explanations for its evaluations.
## Files Included
This checkpoint contains:
- **Model Weights**: `model*.safetensors` - The actual model parameters
- **Tokenizer**: Complete tokenizer configuration and vocabulary
- **Configuration**: Model and generation configuration files
**Note**: This checkpoint contains only inference files (no optimizer states).
## Usage
### For Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the checkpoint
model = AutoModelForCausalLM.from_pretrained(
"ea-dev-final",
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("ea-dev-final")
# Example evaluation prompt
prompt = """Please evaluate the quality of this video based on the following criteria:
1. Visual quality and clarity
2. Temporal consistency
3. Motion smoothness
Video description: A person walking through a park with trees swaying in the wind.
Let me think step by step:"""
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(
**inputs,
max_length=512,
do_sample=True,
temperature=0.7,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Resume Training (if optimizer states included)
```bash
# Use with LLaMA-Factory
llamafactory-cli train \
--stage sft \
--model_name_or_path ea-dev-final \
--resume_from_checkpoint ea-dev-final
```
## Training Progress
This checkpoint represents an intermediate state in the training process:
- **Steps Completed**: 471
- **Epochs**: 3.00
- **Current Loss**: 0.8296
## Related Models
This checkpoint is part of a series. Other checkpoints from the same training run:
- Look for repositories with pattern: `ea-dev-checkpoint-*`
- Final model: `ea-dev-final`
## License
This model checkpoint is released under the Apache 2.0 license.
## Citation
If you use this checkpoint, please cite:
```bibtex
@misc{eval-agent-qwen2.5-checkpoint-471,
title={Evaluation Agent Qwen2.5 Checkpoint 471},
author={Your Name},
year={2025},
howpublished={\url{https://huggingface.co/ea-dev-final}}
}
```
| Elizavr/blockassist-bc-reclusive_shaggy_bee_1755618776 | Elizavr | 2025-08-19T15:53:36Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, reclusive shaggy bee, arxiv:2504.07091, region:us] | null | 2025-08-19T15:53:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive shaggy bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| hakimjustbao/blockassist-bc-raging_subtle_wasp_1755617196 | hakimjustbao | 2025-08-19T15:53:23Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, raging subtle wasp, arxiv:2504.07091, region:us] | null | 2025-08-19T15:53:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| indoempatnol/blockassist-bc-fishy_wary_swan_1755617105 | indoempatnol | 2025-08-19T15:53:17Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, fishy wary swan, arxiv:2504.07091, region:us] | null | 2025-08-19T15:53:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| 0xGareeb/blockassist-bc-diving_jumping_llama_1755618596 | 0xGareeb | 2025-08-19T15:51:51Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, diving jumping llama, arxiv:2504.07091, region:us] | null | 2025-08-19T15:51:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving jumping llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| lqpl/blockassist-bc-hairy_insectivorous_antelope_1755618608 | lqpl | 2025-08-19T15:51:17Z | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, hairy insectivorous antelope, arxiv:2504.07091, region:us] | null | 2025-08-19T15:50:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| MidnightRunner/MIDNIGHT_NAI-XL_vPredV1 | MidnightRunner | 2025-08-19T15:50:23Z | 406 | 2 | diffusers | [diffusers, SDXL, noobai-XL, Vpred-1.0, text-to-image, ComfyUI, Automatic1111, Diffuser, en, dataset:LaxharLab/NoobAI-XL-dataset, base_model:Laxhar/noobai-XL-Vpred-1.0, base_model:finetune:Laxhar/noobai-XL-Vpred-1.0, license:creativeml-openrail-m, region:us] | text-to-image | 2025-02-02T01:09:01Z |
---
license: creativeml-openrail-m
language:
- en
base_model: Laxhar/noobai-XL-Vpred-1.0
tags:
- SDXL
- noobai-XL
- Vpred-1.0
- text-to-image
- ComfyUI
- Automatic1111
- Diffuser
pipeline_tag: text-to-image
library_name: diffusers
datasets:
- LaxharLab/NoobAI-XL-dataset
metrics:
- FID
- IS
widget:
- text: >-
high quality, masterpiece, detailed, 8K, artist:nyantcha,
evangeline_(nyantcha), vibrant surreal artwork, rainbow, light particles,
from above, volumetric lighting, ((adult girl:1.2)), natural huge breasts,
woman dressed as white rabbit, sleek pure white outfit, delicate white bunny
ears, braid, plump, skindentation, huge breasts, falling into swirling black
hole, seen from behind, glancing over shoulder, alluring mysterious
expression, dress, zipper, zipper pull, detached sleeves, breasts apart
(shoulder straps), buckles, long dress, swirling cosmic patterns, glowing
particles, dramatic lighting, vibrant neon pink and blue tones,
hyper-detailed, cinematic depth of field, smooth texture, film grain,
chromatic aberration, high contrast, limited palette
parameters:
negative_prompt: >-
lowres, worst quality, low quality, bad anatomy, bad hands, 4koma, comic,
greyscale, censored, jpeg artifacts, overly saturated, overly vivid,
(multiple views:1.1), (bad:1.05), fewer, extra, missing, worst quality,
jpeg artifacts, bad quality, watermark, unfinished, displeasing, sepia,
sketch, flat color, signature, artistic error, username, scan, (blurry,
lowres, worst quality, (low quality:1.1), ugly, (bad anatomy:1.05), artist
name, (patreon username:1.2)
output:
url: stand_on_ripplewater.jpeg
---
# MIDNIGHT_NAI-XL_vPredV1
**Model Type:** Diffusion-based text-to-image generative model
**Base Model:** SDXL 1.0 & Laxhar/noobai-XL-Vpred-1.0
**License:** [CreativeML Open RAIL++-M](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE)
## Model Description
MIDNIGHT_NAI-XL_vPredV1 is a specialized fine-tuning of the NoobAI-XL (NAI-XL) model, designed to enhance anatomical precision, compositional coherence, and versatile style integration. This model excels in generating high-quality images with vibrant colors while minimizing overexposure.
## Usage Recommendations
### **Sampling Methods**
MIDNIGHT_NAI-XL_vPred is optimized specifically for **Euler (normal)**.
Use **ModelSamplingDiscrete** with **V-prediction** and **ZsNR set to true**.
Other samplers may not provide stable results, and **V-prediction models do not support other samplers**.
### **CFG Scaling**
**The Dynamic CFG plugin is bypassed; it is kept only as a backup for potential future needs.**
Manually adjust **CFG scaling within a range of 3-4** for the best balance.
For optimal results, a **preferred setting of 3.5** is recommended.
### **Custom Workflow**
For an optimized generation process, use the [**MIDNIGHT1111_Chasm 2025-02-04**](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%202025-02-04.json) ComfyUI workflow.
This workflow is specifically designed to **leverage the strengths of MIDNIGHT_NAI-XL_vPred**, providing a streamlined and efficient image generation pipeline.
## MIDNIGHT1111_Chasm
For an optimized generation process, consider using the custom workflow [MIDNIGHT1111_Chasm 02-05-25](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json). This workflow is tailored to leverage the strengths of the MIDNIGHT_NAI-XL_vPredV1 model, providing a streamlined and efficient image generation pipeline.

*Note: The above image is a preview of the `MIDNIGHT1111_Chasm` workflow.*
### Method I: reForge without MIDNIGHT1111_Chasm Workflow
1. **Installation:** If not already installed, follow the instructions in the [reForge repository](https://github.com/Panchovix/stable-diffusion-webui-reForge) to set up.
2. **Usage:** Launch WebUI and use the model as usual.
### Method II: ComfyUI *with* MIDNIGHT1111_Chasm Workflow
1. **Installation:** Follow the setup instructions in the [ComfyUI repository](https://github.com/comfyanonymous/ComfyUI).
2. **Workflow Sample:** Utilize the provided [ComfyUI workflow sample](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json) for guidance.
### Method III: WebUI without MIDNIGHT1111_Chasm Workflow
1. **Installation:** Follow the instructions in the [WebUI repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to set up.
2. **Navigate to the WebUI Directory:** Before updating or switching branches, ensure you're inside the `stable-diffusion-webui` folder:
```bash
cd stable-diffusion-webui
```
3. **Switch to the Development Branch (Optional, for testing new features):** If you want to use the latest features from the development branch, run:
```bash
git switch dev
git pull
```
⚠️ **Note:** The `dev` branch may contain bugs. If stability is your priority, it's best to stay on the `main` branch.
4. **Update WebUI (Main or Dev Branch):** To pull the latest updates while on either branch, run:
```bash
git pull
```
🔄 **Restart WebUI after updating to apply changes.**
5. **Configuration:** Ensure you're using a stable branch, as the dev branch may contain bugs.
### Method IV: Diffusers without MIDNIGHT1111_Chasm Workflow
```python
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerDiscreteScheduler
ckpt_path = "/path/to/model.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(
ckpt_path,
use_safetensors=True,
torch_dtype=torch.float16,
)
scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)
pipe.enable_xformers_memory_efficient_attention()
pipe = pipe.to("cuda")
prompt = """masterpiece, best quality,artist:john_kafka,artist:nixeu,artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)"""
negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=832,
height=1216,
num_inference_steps=28,
guidance_scale=5,
generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```
## e621/Danbooru Artist Wildcards for A1111 & ComfyUI Enclosed in CSV & TXT Formats
To enhance the model's performance and specificity, the following trigger word lists in CSV format are included:
- [`danbooru_artist_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_webui.csv)
- [`danbooru_character_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_webui.csv)
- [`e621_artist_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_webui.csv)
- [`e621_character_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_webui.csv)
These lists provide recognized tags for various artists and characters, facilitating more accurate and tailored image generation.
Wildcard files in `.txt` format are included, designed for seamless integration with **AUTOMATIC1111** and **ComfyUI** and optimized for dynamic prompt generation using artist data from **e621** and **Danbooru**.
- **TXT Format:** Artist tags sanitized by removing URLs and converted from `.csv` to `.txt` for improved readability across different extensions.
- **Dual Dataset Support:** Supports both e621 and Danbooru datasets to enhance art style diversity.
- **Smooth Randomization:** Structured with trailing commas for seamless wildcard cycling during prompt generation.
## How to Use Wildcards
### For A1111
1. **Install:** [stable-diffusion-webui-wildcards](https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards)
2. **Place the `.txt` file in:**
```
/A1111/extensions/stable-diffusion-webui-wildcards
```
3. **Use in your prompt like this:**
```
__e621_artist_wildcard__, very awa, masterpiece, best quality, amazing quality
```
```
__danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality
```
```
__e621_artist_wildcard__, __danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality
```
### For ComfyUI
1. **Install:** [ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
2. **Place the `.txt` file in:**
```
/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/wildcards
```
or
```
/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/custom_wildcards
```
3. **Use the wildcard node to trigger dynamic randomization in your workflows.**
## What’s Included in Wildcards
TXT-formatted files containing clean, artist-focused wildcards, ready for dynamic prompt workflows in A1111 and ComfyUI:
- [danbooru_artist_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_wildcard.txt)
- [danbooru_character_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_wildcard.txt)
- [e621_artist_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_wildcard.txt)
- [e621_character_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_wildcard.txt)
## Acknowledgments
Special thanks to:
- **Development Team:** Laxhar Lab
- **Coding Contributions:** Euge
- **e621/Danbooru Wildcards** [ipsylon0000](https://civitai.com/user/ipsylon0000)
- **Community Support:** Various contributors
## Additional Resources
- **Guidebook for NoobAI XL:** [English Version](https://civitai.com/articles/8962)
- **Recommended LoRa List for NoobAI XL:** [Resource Link](https://fcnk27d6mpa5.feishu.cn/wiki/IBVGwvVGViazLYkMgVEcvbklnge)
- **Fixing Black Images in ComfyUI on macOS (M1/M2):** [Read the Article](https://civitai.com/articles/11106)
- **Creative Solutions and Services:** [Magnabos.co](https://magnabos.co/)
## License
This model is licensed under the [CreativeML Open RAIL++-M License](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE). By using this model, you agree to the terms and conditions outlined in the license.
|
WenFengg/21_14l4_19__8_
|
WenFengg
| 2025-08-19T15:49:16Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T15:32:34Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
jacoboss/MyGemmaNPC
|
jacoboss
| 2025-08-19T15:48:33Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T21:28:50Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jacoboss/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v4_ft_npo_gdr_lora_positive_dataset_v2
|
concept-unlearning
| 2025-08-19T15:48:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T15:46:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF
|
tensorblock
| 2025-08-19T15:48:09Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:jan-hq/Qwen3-4B-v0.3-deepresearch-100-step",
"base_model:quantized:jan-hq/Qwen3-4B-v0.3-deepresearch-100-step",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-19T15:03:01Z |
---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: jan-hq/Qwen3-4B-v0.3-deepresearch-100-step
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## jan-hq/Qwen3-4B-v0.3-deepresearch-100-step - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [jan-hq/Qwen3-4B-v0.3-deepresearch-100-step](https://huggingface.co/jan-hq/Qwen3-4B-v0.3-deepresearch-100-step).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
<think>
</think>
```
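As an illustration, a downloaded quant can be run locally through the `llama-cpp-python` bindings. This is a minimal sketch; the file path and quant choice are assumptions:
```python
from llama_cpp import Llama

# Assumption: the Q4_K_M quant was downloaded to the current directory
llm = Llama(model_path="Qwen3-4B-v0.3-deepresearch-100-step-Q4_K_M.gguf", n_ctx=4096)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Briefly explain what a GGUF file is."},
    ],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```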
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q2_K.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q2_K.gguf) | Q2_K | 1.669 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q3_K_S.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q3_K_S.gguf) | Q3_K_S | 1.887 GB | very small, high quality loss |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q3_K_M.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q3_K_M.gguf) | Q3_K_M | 2.076 GB | very small, high quality loss |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q3_K_L.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q3_K_L.gguf) | Q3_K_L | 2.240 GB | small, substantial quality loss |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q4_0.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q4_0.gguf) | Q4_0 | 2.370 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q4_K_S.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q4_K_S.gguf) | Q4_K_S | 2.383 GB | small, greater quality loss |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q4_K_M.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q4_K_M.gguf) | Q4_K_M | 2.497 GB | medium, balanced quality - recommended |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q5_0.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q5_0.gguf) | Q5_0 | 2.824 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q5_K_S.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q5_K_S.gguf) | Q5_K_S | 2.824 GB | large, low quality loss - recommended |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q5_K_M.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q5_K_M.gguf) | Q5_K_M | 2.890 GB | large, very low quality loss - recommended |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q6_K.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q6_K.gguf) | Q6_K | 3.306 GB | very large, extremely low quality loss |
| [Qwen3-4B-v0.3-deepresearch-100-step-Q8_0.gguf](https://huggingface.co/tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF/blob/main/Qwen3-4B-v0.3-deepresearch-100-step-Q8_0.gguf) | Q8_0 | 4.280 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF --include "Qwen3-4B-v0.3-deepresearch-100-step-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
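Alternatively, the same download can be scripted from Python with `huggingface_hub`; a minimal sketch:
```python
from huggingface_hub import hf_hub_download

# Download a single quant file into a local directory
path = hf_hub_download(
    repo_id="tensorblock/jan-hq_Qwen3-4B-v0.3-deepresearch-100-step-GGUF",
    filename="Qwen3-4B-v0.3-deepresearch-100-step-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(path)
```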
|
aaron-ser/smolvla-two-cam-policy
|
aaron-ser
| 2025-08-19T15:43:55Z | 2 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:aaron-ser/two-cam-dataset",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-12T14:48:55Z |
---
base_model: lerobot/smolvla_base
datasets: aaron-ser/two-cam-dataset
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
iamsubingyawali/gemma-3-4b-nepali-news-summarizer
|
iamsubingyawali
| 2025-08-19T15:42:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"base_model:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T08:08:57Z |
---
base_model: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
library_name: transformers
model_name: gemma-3-4b-nepali-news-summarizer
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for gemma-3-4b-nepali-news-summarizer
This model is a fine-tuned version of [unsloth/gemma-3-4b-pt-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-4b-pt-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="iamsubingyawali/gemma-3-4b-nepali-news-summarizer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/iamsubingyawali-university-of-northampton/huggingface/runs/6gru05iy)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.53.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755616456
|
pempekmangedd
| 2025-08-19T15:41:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:41:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755616819
|
Sayemahsjn
| 2025-08-19T15:39:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:39:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755617735
|
Elizavr
| 2025-08-19T15:36:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive shaggy bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:36:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive shaggy bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1755615849
|
chainway9
| 2025-08-19T15:33:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:33:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Fai1-GGUF
|
mradermacher
| 2025-08-19T15:31:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Lupakisyo/Fai1",
"base_model:quantized:Lupakisyo/Fai1",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-08-19T15:22:52Z |
---
base_model: Lupakisyo/Fai1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Lupakisyo/Fai1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Fai1-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Fai1-GGUF/resolve/main/Fai1.f16.gguf) | f16 | 0.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
allenai/olmOCR-7B-0225-preview
|
allenai
| 2025-08-19T15:31:31Z | 258,271 | 693 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"en",
"dataset:allenai/olmOCR-mix-0225",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-01-15T21:14:47Z |
---
language:
- en
license: apache-2.0
datasets:
- allenai/olmOCR-mix-0225
base_model:
- Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
new_version: allenai/olmOCR-7B-0825
---
<img alt="olmOCR Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmocr/olmocr.png" width="242px" style="margin-left:'auto' margin-right:'auto' display:'block'">
# olmOCR-7B-0225-preview
This is a preview release of the olmOCR model that's fine-tuned from Qwen2-VL-7B-Instruct using the
[olmOCR-mix-0225](https://huggingface.co/datasets/allenai/olmOCR-mix-0225) dataset.
Quick links:
- 📃 [Paper](https://olmocr.allenai.org/papers/olmocr.pdf)
- 🤗 [Dataset](https://huggingface.co/datasets/allenai/olmOCR-mix-0225)
- 🛠️ [Code](https://github.com/allenai/olmocr)
- 🎮 [Demo](https://olmocr.allenai.org/)
The best way to use this model is via the [olmOCR toolkit](https://github.com/allenai/olmocr).
The toolkit comes with an efficient inference setup via sglang that can handle millions of documents
at scale.
## Usage
This model expects as input a single document image, rendered such that the longest dimension is 1024 pixels.
The prompt must then contain the additional metadata from the document, and the easiest way to generate this
is to use the methods provided by the [olmOCR toolkit](https://github.com/allenai/olmocr).
## Manual Prompting
If you want to prompt this model manually instead of using the [olmOCR toolkit](https://github.com/allenai/olmocr), please see the code below.
In normal usage, the olmOCR toolkit builds the prompt by rendering the PDF page, and
extracting relevant text blocks and image metadata. To duplicate that you will need to
```bash
pip install olmocr
```
and then run the following sample code.
```python
import torch
import base64
import urllib.request
from io import BytesIO
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from olmocr.data.renderpdf import render_pdf_to_base64png
from olmocr.prompts import build_finetuning_prompt
from olmocr.prompts.anchor import get_anchor_text
# Initialize the model
model = Qwen2VLForConditionalGeneration.from_pretrained("allenai/olmOCR-7B-0225-preview", torch_dtype=torch.bfloat16).eval()
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# Grab a sample PDF
urllib.request.urlretrieve("https://molmo.allenai.org/paper.pdf", "./paper.pdf")
# Render page 1 to an image
image_base64 = render_pdf_to_base64png("./paper.pdf", 1, target_longest_image_dim=1024)
# Build the prompt, using document metadata
anchor_text = get_anchor_text("./paper.pdf", 1, pdf_engine="pdfreport", target_length=4000)
prompt = build_finetuning_prompt(anchor_text)
# Build the full prompt
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": prompt},
{"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_base64}"}},
],
}
]
# Apply the chat template and processor
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
main_image = Image.open(BytesIO(base64.b64decode(image_base64)))
inputs = processor(
text=[text],
images=[main_image],
padding=True,
return_tensors="pt",
)
inputs = {key: value.to(device) for (key, value) in inputs.items()}
# Generate the output
output = model.generate(
**inputs,
temperature=0.8,
max_new_tokens=50,
num_return_sequences=1,
do_sample=True,
)
# Decode the output
prompt_length = inputs["input_ids"].shape[1]
new_tokens = output[:, prompt_length:]
text_output = processor.tokenizer.batch_decode(
new_tokens, skip_special_tokens=True
)
print(text_output)
# ['{"primary_language":"en","is_rotation_valid":true,"rotation_correction":0,"is_table":false,"is_diagram":false,"natural_text":"Molmo and PixMo:\\nOpen Weights and Open Data\\nfor State-of-the']
```
## License and use
olmOCR is licensed under the Apache 2.0 license.
olmOCR is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
|
hanskarlo/dqn-SpaceInvadersNoFrameskip-v4
|
hanskarlo
| 2025-08-19T15:31:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-19T15:29:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 824.00 +/- 279.92
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hanskarlo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hanskarlo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hanskarlo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 48),
('buffer_size', 105000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
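These hyperparameters map directly onto SB3's `DQN` constructor. The following is a minimal sketch of the equivalent manual setup (assuming Atari ROMs are installed via `ale-py`), not the RL Zoo's exact pipeline:
```python
import gymnasium as gym
from stable_baselines3 import DQN
from stable_baselines3.common.atari_wrappers import AtariWrapper
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

# AtariWrapper applies the standard preprocessing; VecFrameStack stacks 4 frames
env = VecFrameStack(
    DummyVecEnv([lambda: AtariWrapper(gym.make("SpaceInvadersNoFrameskip-v4"))]),
    n_stack=4,
)
model = DQN(
    "CnnPolicy",
    env,
    batch_size=48,
    buffer_size=105_000,
    exploration_final_eps=0.01,
    exploration_fraction=0.1,
    gradient_steps=1,
    learning_rate=1e-4,
    learning_starts=100_000,
    target_update_interval=1000,
    train_freq=4,
)
model.learn(total_timesteps=10_000_000)
```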
|
sameddallaa/q_frozen_lake_v1_slippery
|
sameddallaa
| 2025-08-19T15:30:33Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-19T15:30:25Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q_frozen_lake_v1_slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="sameddallaa/q_frozen_lake_v1_slippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
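For reference, a minimal greedy rollout sketch, assuming the pickled dict follows the Deep RL course format and exposes a `qtable` key:
```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    # Pick the greedy action from the learned Q-table
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```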
|
saracandu/stldec_random
|
saracandu
| 2025-08-19T15:27:50Z | 41 | 0 | null |
[
"safetensors",
"stldec",
"custom_code",
"region:us"
] | null | 2025-05-24T14:38:45Z |
---
{}
---
# Materials for the paper "Bridging Logic and Learning: Decoding Temporal Logic Embeddings via Transformers" (Candussio et al.) @ ECML-PKDD 2025
**TL;DR:**
- (trained) models are available at: https://huggingface.co/collections/saracandu/stldec-ecml-pkdd-2025-686fe174a16915bc32aa53eb
- code, results, and other details can be found in this repo.
The goal of STLdecoder is to take a NeSy embedding of a Signal Temporal Logic (STL) formula and recover a semantically equivalent formula.
The `encoder.py` file allows you to obtain the NeSy embeddings of (a list of) formulae with respect to a predefined anchor set, which you can find in the `anchor_sets/` folder. More details on this procedure can be found at https://ebooks.iospress.nl/doi/10.3233/FAIA240638
This class also relies on the following files: `phis_generator.py`, `traj_measure.py`, `kernel.py`, `stl.py`, `anchor_set_generation.py`, `custom_typing.py`, `trajectories.py`.
The `decoder.py` component aims at translating a vector (i.e., the encoding of a formula, as done by `encoder.py`) into a string (i.e., an STL formula consisting of a hybrid syntax made of numbers, parentheses, and words, whose vocabulary can be found in the `tokenizer_files/` folder).
This is practically implemented in the `modeling_stldec.py` file, as we perform the aforementioned procedure using a decoder-only Transformer architecture. This process requires autoregressively generating the tokens of the STL formula and embedding them in order to merge this information with the initial semantic vector through the cross-attention block. The `configuration.py` file defines the model configuration that the `transformers` classes rely on.
In order to train this architecture, we can use the `training.py` file, leveraging the different training settings available in the `training_config/` folder.
|
Jacksss123/net72_uid241
|
Jacksss123
| 2025-08-19T15:25:31Z | 1 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-04T19:56:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AdoCleanCode/neox_capital_only_v2
|
AdoCleanCode
| 2025-08-19T15:25:09Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T10:13:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PavanSakthivel/ppo-LunarLander-v2
|
PavanSakthivel
| 2025-08-19T15:25:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-19T15:24:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.68 +/- 21.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the checkpoint filename is an assumption; check the repo's file list
checkpoint = load_from_hub(
    repo_id="PavanSakthivel/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
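A quick evaluation sketch (assuming `gymnasium` with the Box2D extra is installed; newer gymnasium releases may name the env `LunarLander-v3`):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```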
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755615291
|
hakimjustbao
| 2025-08-19T15:23:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:23:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755616839
|
lqpl
| 2025-08-19T15:22:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:21:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ucmp137538/best_RPT_coder_mathrl_ckpt-1000
|
ucmp137538
| 2025-08-19T15:22:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T15:19:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
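Absent an official snippet, a minimal sketch assuming a standard 🤗 Transformers causal LM (repo id taken from this listing; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ucmp137538/best_RPT_coder_mathrl_ckpt-1000"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Illustrative prompt only; this checkpoint's intended usage is undocumented
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```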
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TAUR-dev/M-voting_setup3_1epch_1e6_all_tasks_only_sft-sft
|
TAUR-dev
| 2025-08-19T15:20:09Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-19T15:18:45Z |
# M-voting_setup3_1epch_1e6_all_tasks_only_sft-sft
This model was created as part of the **voting_setup3_1epch_1e6_all_tasks_only_sft** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: voting_setup3_1epch_1e6_all_tasks_only_sft
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_voting_setup3_1epch_1e6_all_tasks_only_sft_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/datastor1/mwadhwa/skill_inject_outputs/sf_experiments/skills_in_rl/voting_setup3_1epch_1e6_all_tasks_only_sft/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__voting_setup3_1epch_1e6_all_tasks_only_sft__v1", "sf_eval_before_training": false, "sf_wandb_project": "voting_setup3_1epch_1e6_all_tasks_only_sft_sft", "sf_eval_steps": null, "run_name": "voting_setup3_1epch_1e6_all_tasks_only_sft_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__voting_setup3_1epch_1e6_all_tasks_only_sft__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-voting_setup3_1epch_1e6_all_tasks_only_sft-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-voting_setup3_1epch_1e6_all_tasks_only_sft-sft")
```
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755615136
|
mang3dd
| 2025-08-19T15:20:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:20:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Bahrom1996/whisper-uz
|
Bahrom1996
| 2025-08-19T15:16:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"uz",
"dataset:common_voice_14_0",
"base_model:jmshd/whisper-uz",
"base_model:finetune:jmshd/whisper-uz",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-18T12:38:14Z |
---
library_name: transformers
language:
- uz
license: apache-2.0
base_model: jamshidahmadov/whisper-uz
tags:
- generated_from_trainer
datasets:
- common_voice_14_0
metrics:
- wer
model-index:
- name: Whisper base uz - Bahrom
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_14_0
type: common_voice_14_0
config: uz
split: test
args: 'config: uz, split: test'
metrics:
- name: Wer
type: wer
value: 39.4953893762244
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base uz - Bahrom
This model is a fine-tuned version of [jamshidahmadov/whisper-uz](https://huggingface.co/jamshidahmadov/whisper-uz) on the common_voice_14_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4621
- Wer: 39.4954
## Model description
More information needed
## Intended uses & limitations
More information needed
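Absent an official snippet, a minimal transcription sketch using the standard 🤗 Transformers ASR pipeline (`audio.wav` is a placeholder path):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Bahrom1996/whisper-uz")
# "audio.wav" is a placeholder; point it at a local Uzbek recording
print(asr("audio.wav")["text"])
```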
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.5759 | 0.1323 | 500 | 0.4621 | 39.4954 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.0
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Muapi/minimalist-line-art-sdxl-pony
|
Muapi
| 2025-08-19T15:16:02Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:15:50Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Minimalist Line Art (SDXL, Pony)

**Base model**: Flux.1 D
**Trained words**: ArsMJStyle, Minimalist Line Art
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:645070@789742", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
koloni/blockassist-bc-deadly_graceful_stingray_1755614936
|
koloni
| 2025-08-19T15:15:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:15:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kodetr/stunting-7B-Qwen
|
kodetr
| 2025-08-19T15:15:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"stunting",
"kesehatan",
"anak",
"conversational",
"id",
"dataset:kodetr/penelitian-fundamental-stunting-qa",
"base_model:Qwen/Qwen1.5-7B-Chat",
"base_model:finetune:Qwen/Qwen1.5-7B-Chat",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T14:59:41Z |
---
library_name: transformers
tags:
- stunting
- kesehatan
- anak
license: apache-2.0
datasets:
- kodetr/penelitian-fundamental-stunting-qa
language:
- id
metrics:
- rouge
- bleu
pipeline_tag: text-generation
base_model:
- Qwen/Qwen1.5-7B-Chat
---
### Model Description
<!-- Provide a longer summary of what this model is. -->
Consultation (Q&A) on stunting in children
- **Developed by:** Tanwir
- **Language:** Indonesian
### Training

### Use with transformers
Make sure your transformers installation is up to date via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "kodetr/stunting-7B-Qwen"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "Jelaskan definisi 1000 hari pertama kehidupan."},
{"role": "user", "content": "Apa itu 1000 hari pertama kehidupan?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
|
Muapi/flux-christmas-living-room
|
Muapi
| 2025-08-19T15:14:26Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:14:12Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# FLUX Christmas living room

**Base model**: Flux.1 D
**Trained words**: christmas living room
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1011849@1134274", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/zavy-s-fluorescent-flux
|
Muapi
| 2025-08-19T15:11:56Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:11:43Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Zavy's Fluorescent - Flux

**Base model**: Flux.1 D
**Trained words**: zavy-flrscnt
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:737408@824658", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
2hpsatt/blockassist-bc-huge_deft_eagle_1755616186
|
2hpsatt
| 2025-08-19T15:10:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:10:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755614471
|
thanobidex
| 2025-08-19T15:09:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:09:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Nexa-Vector-11-Qwen-GGUF
|
mradermacher
| 2025-08-19T15:09:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:iversonzhou/Nexa-Vector-11-Qwen",
"base_model:quantized:iversonzhou/Nexa-Vector-11-Qwen",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-19T14:56:35Z |
---
base_model: iversonzhou/Nexa-Vector-11-Qwen
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/iversonzhou/Nexa-Vector-11-Qwen
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Nexa-Vector-11-Qwen-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
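As one concrete option, a minimal sketch with the third-party `llama-cpp-python` bindings (an assumption; any GGUF-compatible runtime works, and the Q4_K_M file is just an example pick):
```python
from llama_cpp import Llama

# Load one of the quantized files downloaded from this repo
llm = Llama(model_path="Nexa-Vector-11-Qwen.Q4_K_M.gguf")
out = llm("Hello, world!", max_tokens=32)
print(out["choices"][0]["text"])
```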
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Muapi/geometric-ce
|
Muapi
| 2025-08-19T15:09:27Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:09:18Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Geometric - CE

**Base model**: Flux.1 D
**Trained words**: gmtrcCE style, cubism, geometric, honeycomb, curvilinear
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:801170@895845", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755614551
|
sampingkaca72
| 2025-08-19T15:08:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:08:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/3d_flux-style
|
Muapi
| 2025-08-19T15:07:43Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:07:35Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# 3D_Flux Style

**Base model**: Flux.1 D
**Trained words**: 3D01S , kawaii, anime
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:689478@771650", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Kurosawama/gemma-3-1b-it-Retranslation-align
|
Kurosawama
| 2025-08-19T15:07:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T15:07:28Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
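Absent an official snippet, a minimal chat sketch (assumes this repo holds full model weights and a chat template, which the `trl`/`dpo` tags suggest but do not guarantee):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Kurosawama/gemma-3-1b-it-Retranslation-align"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Illustrative prompt; the card does not document the intended task format
messages = [{"role": "user", "content": "Translate to French: Good morning."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```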
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Muapi/pascal-blanch
|
Muapi
| 2025-08-19T15:06:50Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:06:40Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Pascal Blanché

**Base model**: Flux.1 D
**Trained words**: By Passcal Blanché
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1285926@1274884", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/stippled-illustration-flux-lora
|
Muapi
| 2025-08-19T15:06:21Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:05:37Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Stippled Illustration (Flux LoRA)

**Base model**: Flux.1 D
**Trained words**: STPPLD
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:772319@863812", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755614240
|
pempekmangedd
| 2025-08-19T15:06:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:06:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
numen-tech/Qwen3-4B-Instruct-2507-GPTQ-Int4
|
numen-tech
| 2025-08-19T15:06:19Z | 0 | 0 |
mlc-llm
|
[
"mlc-llm",
"text-generation",
"conversational",
"en",
"arxiv:2210.17323",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-19T15:01:34Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen3-4B-Instruct-2507
base_model_relation: quantized
library_name: mlc-llm
pipeline_tag: text-generation
---
4-bit [GPTQ](https://arxiv.org/abs/2210.17323) quantized version of [Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) for use with the [Private LLM app](https://privatellm.app/).
|
Muapi/3d-minimal-design-flux.1-dev-lora
|
Muapi
| 2025-08-19T15:04:06Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:03:47Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# 3D Minimal Design - Flux.1 Dev Lora

**Base model**: Flux.1 D
**Trained words**: Minimalist Design
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:813341@1003935", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/gigachad-flux1.d-sdxl
|
Muapi
| 2025-08-19T15:03:05Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:02:54Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Gigachad - Flux1.D & SDXL

**Base model**: Flux.1 D
**Trained words**: Gigachad is a muscular man
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:237712@786259", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755613987
|
vwzyrraz7l
| 2025-08-19T15:03:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:02:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755614041
|
helmutsukocok
| 2025-08-19T15:01:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:01:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Kurosawama/gemma-3-1b-it-Translation-align
|
Kurosawama
| 2025-08-19T15:01:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T15:01:43Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
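Absent an official snippet, a minimal sketch using the chat-aware `pipeline` API (assumes full model weights with a chat template; the example prompt is illustrative):
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Kurosawama/gemma-3-1b-it-Translation-align", device_map="auto")
messages = [{"role": "user", "content": "Translate to English: Bonjour tout le monde."}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"][-1])
```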
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755613820
|
quantumxnode
| 2025-08-19T14:59:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:58:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wjbmattingly/lfm2-vl-450M-yiddish
|
wjbmattingly
| 2025-08-19T14:58:01Z | 0 | 0 | null |
[
"safetensors",
"lfm2-vl",
"custom_code",
"base_model:LiquidAI/LFM2-VL-450M",
"base_model:finetune:LiquidAI/LFM2-VL-450M",
"region:us"
] | null | 2025-08-19T14:57:50Z |
---
base_model:
- LiquidAI/LFM2-VL-450M
---
# model_step_13000
## Model Description
This model is a fine-tuned version of **LiquidAI/LFM2-VL-450M** using the brute-force-training package.
- **Base Model**: LiquidAI/LFM2-VL-450M
- **Training Status**: 🔄 In Progress
- **Generated**: 2025-08-19 10:41:14
- **Training Steps**: 13,000
## Training Details
### Dataset
- **Dataset**: johnlockejrr/yiddish_synth_v2
- **Training Examples**: 100,000
- **Validation Examples**: 4,999
### Training Configuration
- **Max Steps**: 100,000
- **Batch Size**: 15
- **Learning Rate**: 7e-05
- **Gradient Accumulation**: 1 steps
- **Evaluation Frequency**: Every 1,000 steps
### Current Performance
- **Training Loss**: 0.124526
- **Evaluation Loss**: 0.189137
## Pre-Training Evaluation
**Initial Model Performance (before training):**
- **Loss**: 2.626098
- **Perplexity**: 13.82
- **Character Accuracy**: 31.1%
- **Word Accuracy**: 12.9%
## Evaluation History
### All Checkpoint Evaluations
| Step | Checkpoint Type | Loss | Perplexity | Char Acc | Word Acc | Improvement vs Pre |
|------|----------------|------|------------|----------|----------|--------------------|
| Pre | pre_training | 2.6261 | 13.82 | 31.1% | 12.9% | +0.0% |
| 1,000 | checkpoint | 0.9395 | 2.56 | 20.1% | 4.1% | +64.2% |
| 2,000 | checkpoint | 0.8058 | 2.24 | 21.2% | 4.0% | +69.3% |
| 3,000 | checkpoint | 0.7305 | 2.08 | 23.0% | 6.1% | +72.2% |
| 4,000 | checkpoint | 0.6669 | 1.95 | 20.6% | 3.4% | +74.6% |
| 5,000 | checkpoint | 0.5341 | 1.71 | 21.4% | 3.6% | +79.7% |
| 6,000 | checkpoint | 0.4656 | 1.59 | 20.9% | 3.8% | +82.3% |
| 7,000 | checkpoint | 0.3917 | 1.48 | 21.4% | 3.5% | +85.1% |
| 8,000 | checkpoint | 0.3310 | 1.39 | 21.6% | 4.8% | +87.4% |
| 9,000 | checkpoint | 0.2892 | 1.34 | 20.7% | 4.0% | +89.0% |
| 10,000 | checkpoint | 0.2566 | 1.29 | 20.9% | 4.7% | +90.2% |
| 11,000 | checkpoint | 0.2199 | 1.25 | 20.2% | 4.9% | +91.6% |
| 12,000 | checkpoint | 0.2033 | 1.23 | 20.3% | 3.2% | +92.3% |
| 13,000 | checkpoint | 0.1891 | 1.21 | 19.4% | 3.4% | +92.8% |
## Training Progress
### Recent Training Steps (Loss Only)
| Step | Training Loss | Timestamp |
|------|---------------|-----------|
| 12,991 | 0.154684 | 2025-08-19T10:40 |
| 12,992 | 0.183019 | 2025-08-19T10:40 |
| 12,993 | 0.157314 | 2025-08-19T10:40 |
| 12,994 | 0.168899 | 2025-08-19T10:40 |
| 12,995 | 0.116096 | 2025-08-19T10:40 |
| 12,996 | 0.122316 | 2025-08-19T10:40 |
| 12,997 | 0.149480 | 2025-08-19T10:40 |
| 12,998 | 0.166267 | 2025-08-19T10:40 |
| 12,999 | 0.152927 | 2025-08-19T10:40 |
| 13,000 | 0.124526 | 2025-08-19T10:40 |
## Training Visualizations
### Training Progress and Evaluation Metrics

*This chart shows the training loss progression, character accuracy, word accuracy, and perplexity over time. Red dots indicate evaluation checkpoints.*
### Evaluation Comparison Across All Checkpoints

*Comprehensive comparison of all evaluation metrics across training checkpoints. Red=Pre-training, Blue=Checkpoints, Green=Final.*
### Available Visualization Files:
- **`training_curves.png`** - 4-panel view: Training loss with eval points, Character accuracy, Word accuracy, Perplexity
- **`evaluation_comparison.png`** - 4-panel comparison: Loss, Character accuracy, Word accuracy, Perplexity across all checkpoints
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# This is a vision-language model shipped with custom code, so
# trust_remote_code is required; for image+text inference you will
# likely want the model's processor class rather than a bare tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    "wjbmattingly/lfm2-vl-450M-yiddish", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "wjbmattingly/lfm2-vl-450M-yiddish", trust_remote_code=True
)
# Your inference code here
```
## Training Configuration
```json
{
"dataset_name": "johnlockejrr/yiddish_synth_v2",
"model_name": "LiquidAI/LFM2-VL-450M",
"max_steps": 100000,
"eval_steps": 1000,
"num_accumulation_steps": 1,
"learning_rate": 7e-05,
"train_batch_size": 15,
"val_batch_size": 1,
"train_select_start": 0,
"train_select_end": 100000,
"val_select_start": 100001,
"val_select_end": 105000,
"train_field": "train",
"val_field": "train",
"image_column": "image",
"text_column": "text",
"user_text": "Please transcribe all the Yiddish text you see in this historical manuscript image. Provide only the transcribed text without any additional commentary or description.",
"max_image_size": 250
}
```
## Model Card Metadata
- **Base Model**: LiquidAI/LFM2-VL-450M
- **Training Framework**: brute-force-training
- **Training Type**: Fine-tuning
- **License**: Inherited from base model
- **Language**: Inherited from base model
---
*This model card was automatically generated by brute-force-training on 2025-08-19 10:41:14*
|
Muapi/imax-70mm-cinematic-film-style-f1d-xl-sd1.5
|
Muapi
| 2025-08-19T14:57:36Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T14:57:27Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# IMAX 70mm cinematic film style F1D + XL + SD1.5

**Base model**: Flux.1 D
**Trained words**: cinematic film style, IMAX70mm , filmstrip border
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1249970@1409079", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755615379
|
Vasya777
| 2025-08-19T14:57:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:56:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matheoqtb/EuroBertV2final
|
matheoqtb
| 2025-08-19T14:56:59Z | 0 | 0 | null |
[
"safetensors",
"eurobert",
"custom_code",
"region:us"
] | null | 2025-08-19T14:56:50Z |
# Exported checkpoint: final
This repository contains a checkpoint extracted from `matheoqtb/euroBertV2_test2` (subfolder `final`) together with the required code files taken from `EuroBERT/EuroBERT-610m`.
Loading:
```python
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("matheoqtb/EuroBertV2final", trust_remote_code=True)
mdl = AutoModel.from_pretrained("matheoqtb/EuroBertV2final", trust_remote_code=True)
```
Task: feature-extraction (embeddings)
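A minimal embedding sketch on top of the loading code above (mean pooling is an assumed strategy; the upstream project may prescribe a different one):
```python
import torch

inputs = tok(["An example sentence."], return_tensors="pt")
with torch.no_grad():
    out = mdl(**inputs)
# Mean-pool the token states into one sentence embedding (assumed pooling)
embedding = out.last_hidden_state.mean(dim=1)
print(embedding.shape)
```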
|
NadiaReula/Asistente-DEI
|
NadiaReula
| 2025-08-19T14:56:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2025-08-18T20:42:32Z |
---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
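Absent an official snippet, a minimal PEFT loading sketch (base model taken from this card's metadata; assumes the repo holds a LoRA-style adapter):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's adapter on top
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
model = PeftModel.from_pretrained(base, "NadiaReula/Asistente-DEI")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
```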
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
Azurastar2903/gemma-3-1b-pt-rk3588-1.2.1
|
Azurastar2903
| 2025-08-19T14:55:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gemma3_text",
"text-generation",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.14168",
"arxiv:2311.12022",
"arxiv:2108.07732",
"arxiv:2107.03374",
"arxiv:2210.03057",
"arxiv:2106.03193",
"arxiv:1910.11856",
"arxiv:2502.12404",
"arxiv:2502.21228",
"arxiv:2404.16816",
"arxiv:2104.12756",
"arxiv:2311.16502",
"arxiv:2203.10244",
"arxiv:2404.12390",
"arxiv:1810.12440",
"arxiv:1908.02660",
"arxiv:2312.11805",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T13:41:44Z |
---
library_name: transformers
license: gemma
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# gemma-3-1b-pt-RK3588-1.2.1
This version of gemma-3-1b-pt has been converted to run on the RK3588 NPU using w8a8_g256 quantization.
This model has been optimized with the following LoRA:
Compatible with RKLLM version: 1.2.1
## Useful links:
[Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)
[RockchipNPU Reddit](https://reddit.com/r/RockchipNPU)
[EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/)
Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531)
Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit
# Original Model Card for base model, gemma-3-1b-pt, below:
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.
### Usage
Below are some code snippets to help you get started quickly with running the model. First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.
```sh
$ pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline("text-generation", model="google/gemma-3-1b-pt", device="cuda", torch_dtype=torch.bfloat16)
output = pipe("Eiffel tower is located in", max_new_tokens=50)
```
#### Running the model on a single / multi GPU
```python
import torch
from transformers import AutoTokenizer, Gemma3ForCausalLM
ckpt = "google/gemma-3-1b-pt"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = Gemma3ForCausalLM.from_pretrained(
ckpt,
torch_dtype=torch.bfloat16,
device_map="auto"
)
prompt = "Eiffel tower is located in"
model_inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=50, do_sample=False)
generation = generation[0][input_len:]
decoded = tokenizer.decode(generation, skip_special_tokens=True)
print(decoded)
```
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
- Total output context of 8192 tokens
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model was
trained with 12 trillion tokens, the 4B model with 4 trillion tokens, and the
1B model with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
- These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image to text prompts
covering child safety policies, including child sexual abuse and
exploitation.
- **Content Safety:** Evaluation of text-to-text and image to text prompts
covering safety policies including harassment, violence and gore, and hate
speech.
- **Representational Harms**: Evaluation of text-to-text and image to text
prompts covering safety policies including bias, stereotyping, and harmful
associations or inaccuracies.
In addition to development level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held-out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny, with input data pre-processing described
and posterior evaluations reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: It's encouraged to perform continuous
monitoring (using evaluation metrics, human review) and the exploration of
de-biasing techniques during model training, fine-tuning, and other use
cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate against malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development, relative to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
|
Azurastar2903/gemma-3-1b-it-rk3588-1.2.1
|
Azurastar2903
| 2025-08-19T14:55:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.14168",
"arxiv:2311.12022",
"arxiv:2108.07732",
"arxiv:2107.03374",
"arxiv:2210.03057",
"arxiv:2106.03193",
"arxiv:1910.11856",
"arxiv:2502.12404",
"arxiv:2502.21228",
"arxiv:2404.16816",
"arxiv:2104.12756",
"arxiv:2311.16502",
"arxiv:2203.10244",
"arxiv:2404.12390",
"arxiv:1810.12440",
"arxiv:1908.02660",
"arxiv:2312.11805",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T13:36:58Z |
---
base_model: google/gemma-3-1b-pt
library_name: transformers
license: gemma
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# gemma-3-1b-it-RK3588-1.2.1
This version of gemma-3-1b-it has been converted to run on the RK3588 NPU using ['w8a8_g256'] quantization.
This model has been optimized with the following LoRA:
Compatible with RKLLM version: 1.2.1
## Useful links:
[Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)
[RockchipNPU Reddit](https://reddit.com/r/RockchipNPU)
[EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/)
Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531)
Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit
# Original Model Card for base model, gemma-3-1b-it, below:
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
- Total output context of 8192 tokens
### Usage
Below are some code snippets to help you get started quickly with running the model. First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.
```sh
$ pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
With instruction-tuned models, you need to use chat templates to process your inputs first. Then, you can pass them to the pipeline.
```python
from transformers import pipeline
import torch
pipe = pipeline("text-generation", model="google/gemma-3-1b-it", device="cuda", torch_dtype=torch.bfloat16)
messages = [
[
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."},]
},
{
"role": "user",
"content": [{"type": "text", "text": "Write a poem on Hugging Face, the company"},]
},
],
]
output = pipe(messages, max_new_tokens=50)
```
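Because `messages` is a batch containing one conversation, the output is nested one level deeper than in the plain-text case; a sketch of reading the assistant reply (assuming the chat output format of recent Transformers versions):
```python
# generated_text holds the whole conversation, with the assistant
# reply appended as the last message.
print(output[0][0]["generated_text"][-1]["content"])
```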
#### Running the model on a single / multi GPU
```python
from transformers import AutoTokenizer, BitsAndBytesConfig, Gemma3ForCausalLM
import torch
model_id = "google/gemma-3-1b-it"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = Gemma3ForCausalLM.from_pretrained(
model_id, quantization_config=quantization_config
).eval()
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
[
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."},]
},
{
"role": "user",
"content": [{"type": "text", "text": "Write a poem on Hugging Face, the company"},]
},
],
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device).to(torch.bfloat16)
with torch.inference_mode():
outputs = model.generate(**inputs, max_new_tokens=64)
outputs = tokenizer.batch_decode(outputs)
```
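`batch_decode` returns the full decoded strings, chat-template markers included; to inspect the reply:
```python
# The decoded text still contains markers such as <start_of_turn>;
# strip or parse them as needed for display.
print(outputs[0])
```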
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model was
trained with 12 trillion tokens, the 4B model with 4 trillion tokens, and the
1B model with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
- These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image to text prompts
covering child safety policies, including child sexual abuse and
exploitation.
- **Content Safety:** Evaluation of text-to-text and image to text prompts
covering safety policies including harassment, violence and gore, and hate
speech.
- **Representational Harms**: Evaluation of text-to-text and image to text
prompts covering safety policies including bias, stereotyping, and harmful
associations or inaccuracies.
In addition to development level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held-out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny, with input data pre-processing described
and posterior evaluations reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: It's encouraged to perform continuous
monitoring (using evaluation metrics, human review) and the exploration of
de-biasing techniques during model training, fine-tuning, and other use
cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate against malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development, relative to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
|
matheoqtb/EuroBertV2180M_pairs
|
matheoqtb
| 2025-08-19T14:55:16Z | 0 | 0 | null |
[
"safetensors",
"eurobert",
"custom_code",
"region:us"
] | null | 2025-08-19T14:55:03Z |
# Exported checkpoint: 180M_pairs
This repository contains a checkpoint extracted from `matheoqtb/euroBertV2_test2` (subfolder `180M_pairs`) together with the code files it needs, taken from `EuroBERT/EuroBERT-610m`.
Loading:
```python
from transformers import AutoTokenizer, AutoModel
tok = AutoTokenizer.from_pretrained('<THIS_REPO>', trust_remote_code=True)
mdl = AutoModel.from_pretrained('<THIS_REPO>', trust_remote_code=True)
```
Task: feature-extraction (embeddings)
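A minimal sketch of turning the loaded model into sentence embeddings (mean pooling is an assumption here; the upstream repo may prescribe a different pooling):
```python
import torch

inputs = tok("EuroBERT encodes multilingual text.", return_tensors="pt")
with torch.no_grad():
    hidden = mdl(**inputs).last_hidden_state      # (1, seq_len, hidden_dim)
# Mask-aware mean pooling over the sequence dimension.
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)                            # (1, hidden_dim)
```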
|
KMH158/t5-small-openassistant-chat
|
KMH158
| 2025-08-19T14:54:39Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T12:36:35Z |
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: t5-small-openassistant-chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-openassistant-chat
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
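As an illustration only, these values map onto `Seq2SeqTrainingArguments` roughly as follows (the output directory is hypothetical, and dataset wiring is omitted):
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="t5-small-openassistant-chat",  # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=80,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",                       # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=40,
    fp16=True,                                 # native AMP mixed precision
)
```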
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3768 | 1.0 | 301 | 2.3842 |
| 2.6839 | 2.0 | 602 | 2.3277 |
| 2.6351 | 3.0 | 903 | 2.2995 |
| 2.6016 | 4.0 | 1204 | 2.2818 |
| 2.5803 | 5.0 | 1505 | 2.2680 |
| 2.5587 | 6.0 | 1806 | 2.2571 |
| 2.541 | 7.0 | 2107 | 2.2481 |
| 2.5323 | 8.0 | 2408 | 2.2409 |
| 2.5102 | 9.0 | 2709 | 2.2349 |
| 2.5063 | 10.0 | 3010 | 2.2288 |
| 2.4953 | 11.0 | 3311 | 2.2242 |
| 2.4926 | 12.0 | 3612 | 2.2192 |
| 2.4786 | 13.0 | 3913 | 2.2154 |
| 2.472 | 14.0 | 4214 | 2.2117 |
| 2.4662 | 15.0 | 4515 | 2.2079 |
| 2.4553 | 16.0 | 4816 | 2.2051 |
| 2.4472 | 17.0 | 5117 | 2.2020 |
| 2.4488 | 18.0 | 5418 | 2.2008 |
| 2.4367 | 19.0 | 5719 | 2.1972 |
| 2.4353 | 20.0 | 6020 | 2.1952 |
| 2.429 | 21.0 | 6321 | 2.1934 |
| 2.4247 | 22.0 | 6622 | 2.1912 |
| 2.4242 | 23.0 | 6923 | 2.1901 |
| 2.4196 | 24.0 | 7224 | 2.1887 |
| 2.4169 | 25.0 | 7525 | 2.1873 |
| 2.4122 | 26.0 | 7826 | 2.1862 |
| 2.4089 | 27.0 | 8127 | 2.1851 |
| 2.4042 | 28.0 | 8428 | 2.1841 |
| 2.4061 | 29.0 | 8729 | 2.1831 |
| 2.4007 | 30.0 | 9030 | 2.1823 |
| 2.397 | 31.0 | 9331 | 2.1814 |
| 2.3998 | 32.0 | 9632 | 2.1810 |
| 2.3963 | 33.0 | 9933 | 2.1805 |
| 2.3976 | 34.0 | 10234 | 2.1798 |
| 2.3919 | 35.0 | 10535 | 2.1794 |
| 2.3873 | 36.0 | 10836 | 2.1793 |
| 2.3899 | 37.0 | 11137 | 2.1789 |
| 2.3886 | 38.0 | 11438 | 2.1786 |
| 2.3906 | 39.0 | 11739 | 2.1786 |
| 2.393 | 40.0 | 12040 | 2.1785 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF
|
xiaoxingop
| 2025-08-19T14:51:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-19T14:51:49Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-0.6B
tags:
- llama-cpp
- gguf-my-repo
---
# xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-0.6B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -c 2048
```
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755613402
|
hakimjustbao
| 2025-08-19T14:51:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:51:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1755614989
|
zenqqq
| 2025-08-19T14:51:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless reptilian caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:50:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless reptilian caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755615038
|
lilTAT
| 2025-08-19T14:51:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:51:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
prl90777/R1_Qwen3_8B_0719
|
prl90777
| 2025-08-19T14:48:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"lora",
"transformers",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"region:us"
] | null | 2025-08-19T11:31:10Z |
---
library_name: peft
license: mit
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
tags:
- base_model:adapter:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
- lora
- transformers
model-index:
- name: R1_Qwen3_8B_0719
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# R1_Qwen3_8B_0719
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4267
- Map@3: 0.9177
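A minimal sketch of loading this LoRA adapter on top of the base model with PEFT (repo ids taken from this card; dtype and device settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "prl90777/R1_Qwen3_8B_0719")  # attach the adapter
```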
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map@3 |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 26.6085 | 0.0523 | 20 | 1.3286 | 0.7507 |
| 9.7222 | 0.1046 | 40 | 1.0625 | 0.7933 |
| 7.9943 | 0.1569 | 60 | 0.8487 | 0.8183 |
| 7.4982 | 0.2092 | 80 | 0.8259 | 0.8315 |
| 6.7844 | 0.2615 | 100 | 0.7845 | 0.8407 |
| 6.1752 | 0.3138 | 120 | 0.7051 | 0.8571 |
| 5.3012 | 0.3661 | 140 | 0.6606 | 0.8683 |
| 4.7654 | 0.4184 | 160 | 0.5941 | 0.8830 |
| 5.3467 | 0.4707 | 180 | 0.6074 | 0.8771 |
| 4.4068 | 0.5230 | 200 | 0.5947 | 0.8880 |
| 4.9025 | 0.5754 | 220 | 0.5081 | 0.8986 |
| 4.3179 | 0.6277 | 240 | 0.5520 | 0.8941 |
| 4.4065 | 0.6800 | 260 | 0.4970 | 0.9040 |
| 3.7451 | 0.7323 | 280 | 0.4987 | 0.9045 |
| 4.4839 | 0.7846 | 300 | 0.4905 | 0.9085 |
| 3.5164 | 0.8369 | 320 | 0.4644 | 0.9067 |
| 3.9504 | 0.8892 | 340 | 0.4650 | 0.9066 |
| 3.6298 | 0.9415 | 360 | 0.4461 | 0.9106 |
| 3.6195 | 0.9938 | 380 | 0.4242 | 0.9173 |
| 3.0214 | 1.0445 | 400 | 0.5402 | 0.9058 |
| 2.7135 | 1.0968 | 420 | 0.4302 | 0.9203 |
| 2.6106 | 1.1491 | 440 | 0.4071 | 0.9252 |
| 2.8122 | 1.2014 | 460 | 0.4366 | 0.9188 |
| 3.0033 | 1.2537 | 480 | 0.4178 | 0.9230 |
| 2.59 | 1.3060 | 500 | 0.4116 | 0.9233 |
| 3.0395 | 1.3583 | 520 | 0.4267 | 0.9177 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
westlake-repl/ProTrek_650M
|
westlake-repl
| 2025-08-19T14:47:40Z | 19 | 4 |
transformers
|
[
"transformers",
"arxiv:2103.00020",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T02:48:33Z |
---
license: mit
---
**Github repo: https://github.com/westlake-repl/ProTrek**
## Overview
ProTrek is a multimodal model that integrates protein sequence, protein structure, and text information for better
protein understanding. It adopts contrastive learning to learn the representations of protein sequence and structure.
During the pre-training phase, we calculate the InfoNCE loss for each pair of modalities, as [CLIP](https://arxiv.org/abs/2103.00020)
does.
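For reference, a minimal symmetric InfoNCE sketch over a batch of paired embeddings (shapes and temperature value are assumptions, not ProTrek's exact implementation):
```python
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE for paired embeddings a[i] <-> b[i], both of shape (n, d)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.T / temperature                      # (n, n) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)  # positives on the diagonal
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
```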
## Model architecture
**Protein sequence encoder**: [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D)
**Protein structure encoder**: foldseek_t30_150M (identical architecture with esm2 except that the vocabulary only contains 3Di tokens)
**Text encoder**: [BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext)
## Obtain embeddings and calculate similarity score (please clone our repo first)
```python
import torch
from model.ProtTrek.protrek_trimodal_model import ProTrekTrimodalModel
from utils.foldseek_util import get_struc_seq
# Load model
config = {
"protein_config": "weights/ProTrek_650M_UniRef50/esm2_t33_650M_UR50D",
"text_config": "weights/ProTrek_650M_UniRef50/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext",
"structure_config": "weights/ProTrek_650M_UniRef50/foldseek_t30_150M",
"load_protein_pretrained": False,
"load_text_pretrained": False,
"from_checkpoint": "weights/ProTrek_650M_UniRef50/ProTrek_650M_UniRef50.pt"
}
device = "cuda"
model = ProTrekTrimodalModel(**config).eval().to(device)
# Load protein and text
pdb_path = "example/8ac8.cif"
seqs = get_struc_seq("bin/foldseek", pdb_path, ["A"])["A"]
aa_seq = seqs[0]
foldseek_seq = seqs[1].lower()
text = "Replication initiator in the monomeric form, and autogenous repressor in the dimeric form."
with torch.no_grad():
# Obtain protein sequence embedding
seq_embedding = model.get_protein_repr([aa_seq])
print("Protein sequence embedding shape:", seq_embedding.shape)
# Obtain protein structure embedding
struc_embedding = model.get_structure_repr([foldseek_seq])
print("Protein structure embedding shape:", struc_embedding.shape)
# Obtain text embedding
text_embedding = model.get_text_repr([text])
print("Text embedding shape:", text_embedding.shape)
# Calculate similarity score between protein sequence and structure
seq_struc_score = seq_embedding @ struc_embedding.T / model.temperature
print("Similarity score between protein sequence and structure:", seq_struc_score.item())
# Calculate similarity score between protein sequence and text
seq_text_score = seq_embedding @ text_embedding.T / model.temperature
print("Similarity score between protein sequence and text:", seq_text_score.item())
# Calculate similarity score between protein structure and text
struc_text_score = struc_embedding @ text_embedding.T / model.temperature
print("Similarity score between protein structure and text:", struc_text_score.item())
```
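Beyond pairwise scores, the same embeddings support retrieval-style ranking; a minimal sketch reusing `model` and `aa_seq` from the snippet above (the candidate descriptions are made up):
```python
# Rank candidate text descriptions against one protein sequence.
candidates = [
    "Replication initiator in the monomeric form, and autogenous repressor in the dimeric form.",
    "Catalyzes the hydrolysis of ATP.",
    "Structural component of the large ribosomal subunit.",
]
with torch.no_grad():
    seq_emb = model.get_protein_repr([aa_seq])               # (1, d)
    text_embs = model.get_text_repr(candidates)              # (n, d)
    scores = (seq_emb @ text_embs.T / model.temperature)[0]  # (n,)
for score, text in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:8.3f}  {text}")
```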
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755613076
|
kojeklollipop
| 2025-08-19T14:46:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:46:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
umairmaliick/falcon-7b-instruct-taskpro-lora
|
umairmaliick
| 2025-08-19T14:45:49Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:tiiuae/falcon-7b-instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-19T13:53:18Z |
---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-7b-instruct
tags:
- base_model:adapter:tiiuae/falcon-7b-instruct
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: falcon-7b-instruct-taskpro-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-instruct-taskpro-lora
This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2754
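To use the adapter, one common pattern is to attach it to the base model and, optionally, merge it for deployment; a sketch (merging assumes enough memory for the full model):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "umairmaliick/falcon-7b-instruct-taskpro-lora")
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
```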
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 3.2923 |
| No log | 2.0 | 2 | 3.2812 |
| No log | 3.0 | 3 | 3.2754 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755614706
|
lilTAT
| 2025-08-19T14:45:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:45:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Trelis/Qwen3-4B_ds-arc-agi-2-perfect-100_test-c8
|
Trelis
| 2025-08-19T14:45:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T14:44:31Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
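The card gives no inference instructions, so the following is a minimal sketch of loading the model with vanilla transformers; the chat-template call follows the stock Qwen3 convention, and the message content is illustrative.
```python
# Minimal sketch, assuming the uploaded weights load directly with transformers;
# the prompt below is an illustrative assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Trelis/Qwen3-4B_ds-arc-agi-2-perfect-100_test-c8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain the ARC-AGI grid format in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```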
| lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755613191 | lisaozill03 | 2025-08-19T14:45:03Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T14:45:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| weikeduik/mozlegal | weikeduik | 2025-08-19T14:42:52Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-19T14:42:52Z |
---
license: apache-2.0
---
| lilTAT/blockassist-bc-gentle_rugged_hare_1755614412 | lilTAT | 2025-08-19T14:40:39Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T14:40:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Feruru/Classifier | Feruru | 2025-08-19T14:36:48Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-19T14:35:49Z |
---
license: apache-2.0
---
|