modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
string | string | timestamp[us, tz=UTC] | int64 | int64 | string | list | string | timestamp[us, tz=UTC] | string
TencentARC/TokLIP
|
TencentARC
| 2025-08-21T17:45:55Z | 21 | 11 | null |
[
"Tokenizer",
"CLIP",
"UnifiedMLLM",
"image-text-to-text",
"en",
"arxiv:2505.05422",
"base_model:google/siglip2-so400m-patch16-256",
"base_model:finetune:google/siglip2-so400m-patch16-256",
"license:other",
"region:us"
] |
image-text-to-text
| 2025-06-05T06:36:47Z |
---
base_model:
- google/siglip2-so400m-patch16-384
- google/siglip2-so400m-patch16-256
language:
- en
license: other
license_name: other
license_link: https://github.com/TencentARC/TokLIP/blob/main/LICENSE
pipeline_tag: image-text-to-text
tags:
- Tokenizer
- CLIP
- UnifiedMLLM
---
# TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation
<h5 align="center">

[arXiv](https://arxiv.org/abs/2505.05422)
[Code](https://github.com/TencentARC/TokLIP)
[Model](https://huggingface.co/TencentARC/TokLIP)
[License](https://github.com/TencentARC/TokLIP/blob/main/LICENSE)
<br>
</h5>
Welcome to the official code repository for "[**TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation**](https://arxiv.org/abs/2505.05422)".
Your star means a lot to us in developing this project! ⭐⭐⭐
## News
* [2025/08/18] Check our latest results on arXiv ([PDF](https://arxiv.org/pdf/2505.05422))!
* [2025/08/18] We release TokLIP XL with 512 resolution: [🤗 TokLIP_XL_512](https://huggingface.co/TencentARC/TokLIP/blob/main/TokLIP_XL_512.pt)!
* [2025/08/05] We release the training code!
* [2025/06/05] We release the code and models!
* [2025/05/09] Our paper is available on arXiv!
## Introduction
<img src="https://raw.githubusercontent.com/TencentARC/TokLIP/main/docs/TokLIP.png" alt="TokLIP" style="zoom:50%;" />
- We introduce TokLIP, a visual tokenizer that enhances comprehension by **semanticizing** vector-quantized (VQ) tokens and **incorporating CLIP-level semantics** while enabling end-to-end multimodal autoregressive training with standard VQ tokens.
- TokLIP integrates a low-level discrete VQ tokenizer with a ViT-based token encoder to capture high-level continuous semantics.
- Unlike previous approaches (e.g., VILA-U) that *discretize high-level features*, TokLIP **disentangles training objectives for comprehension and generation**, allowing the direct application of advanced VQ tokenizers without the need for tailored quantization operations.
## Installation
```bash
conda create -n toklip python=3.10 -y
conda activate toklip
git clone https://github.com/TencentARC/TokLIP
cd TokLIP
pip install --upgrade pip
pip install -r requirements.txt
```
## Usage
### Model Weights
| Model | Resolution | VQGAN | IN Top1 | COCO TR@1 | COCO IR@1 | Weight |
| :-------: | :--------: | :----------------------------------------------------------: | :-----: | :-------: | :-------: | :----------------------------------------------------------: |
| TokLIP-S  | 256 | [LlamaGen](https://huggingface.co/peizesun/llamagen_t2i/blob/main/vq_ds16_t2i.pt) | 76.4 | 64.06 | 48.46 | [🤗 TokLIP_S_256](https://huggingface.co/TencentARC/TokLIP/blob/main/TokLIP_S_256.pt) |
| TokLIP-L  | 384 | [LlamaGen](https://huggingface.co/peizesun/llamagen_t2i/blob/main/vq_ds16_t2i.pt) | 80.0 | 68.00 | 52.87 | [🤗 TokLIP_L_384](https://huggingface.co/TencentARC/TokLIP/blob/main/TokLIP_L_384.pt) |
| TokLIP-XL | 512 | [IBQ](https://huggingface.co/TencentARC/IBQ-Tokenizer-262144/blob/main/imagenet256_262144.ckpt) | 80.8 | 69.40 | 53.77 | [🤗 TokLIP_XL_512](https://huggingface.co/TencentARC/TokLIP/blob/main/TokLIP_XL_512.pt) |
### Training
1. Please refer to [img2dataset](https://github.com/rom1504/img2dataset) to prepare the WebDataset required for training. You may choose datasets such as **CC3M**, **CC12M**, or **LAION** (see the example after these steps).
2. Prepare the teacher models using `src/covert.py`:
```bash
cd src
TIMM_MODEL='original' python covert.py --model_name 'ViT-SO400M-16-SigLIP2-256' --save_path './model/siglip2-so400m-vit-l16-256.pt'
TIMM_MODEL='original' python covert.py --model_name 'ViT-SO400M-16-SigLIP2-384' --save_path './model/siglip2-so400m-vit-l16-384.pt'
```
3. Train TokLIP using the scripts `src/train_toklip_256.sh` and `src/train_toklip_384.sh`. Set the `--train-data` and `--train-num-samples` arguments accordingly.
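For reference, here is a minimal sketch of such a download using img2dataset's Python API. The metadata file name and column names are assumptions; adapt them to your dataset:

```python
# Sketch: download an image-text dataset into WebDataset shards with img2dataset.
# "cc3m.tsv" and the url/caption column names below are assumptions.
from img2dataset import download

download(
    url_list="cc3m.tsv",             # assumed metadata file with URLs and captions
    input_format="tsv",
    url_col="url",
    caption_col="caption",
    output_format="webdataset",      # produces .tar shards usable as --train-data
    output_folder="./data/cc3m-wds",
    image_size=256,
    processes_count=16,
    thread_count=64,
)
```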
### Evaluation
Please first download the TokLIP model weights.
We provide the evaluation scripts for ImageNet classification and MSCOCO Retrieval in `src/test_toklip_256.sh`, `src/test_toklip_384.sh`, and `src/test_toklip_512.sh`.
Please revise the `--pretrained`, `--imagenet-val`, and `--coco-dir` with your specific paths.
### Inference
We provide the inference example in `src/inference.py`.
```shell
cd src
python inference.py --model-config 'ViT-SO400M-16-SigLIP2-384-toklip' --pretrained 'YOUR_TOKLIP_PATH'
```
### Model Usage
We provide the `build_toklip_encoder` function in `src/create_toklip.py`; you can load TokLIP directly via its `model`, `image_size`, and `model_path` parameters.
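For illustration, loading TokLIP this way might look like the sketch below. The argument names follow the description above, and the config name and checkpoint path are assumptions based on this README; the exact signature may differ:

```python
# Hypothetical usage of build_toklip_encoder; run from the src/ directory.
from create_toklip import build_toklip_encoder

toklip = build_toklip_encoder(
    model="ViT-SO400M-16-SigLIP2-384-toklip",  # config name used in inference.py
    image_size=384,
    model_path="./model/TokLIP_L_384.pt",      # downloaded TokLIP-L checkpoint
)
```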
## TODOs
- [x] Release training codes.
- [x] Release TokLIP-XL with 512 resolution.
## Contact
If you have further questions, please open an issue or contact <haokun.lin@cripac.ia.ac.cn>.
Discussions and potential collaborations are also welcome.
## Acknowledgement
This repo is built upon the following projects:
* [OpenCLIP](https://github.com/mlfoundations/open_clip)
* [LlamaGen](https://github.com/FoundationVision/LlamaGen)
* [DeCLIP](https://github.com/Sense-GVT/DeCLIP)
* [SEED-Voken](https://github.com/TencentARC/SEED-Voken)
We thank the authors for their codes.
## Citation
Please cite our work if you use our code or discuss our findings in your own research:
```bibtex
@article{lin2025toklip,
title={TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation},
author={Lin, Haokun and Wang, Teng and Ge, Yixiao and Ge, Yuying and Lu, Zhichao and Wei, Ying and Zhang, Qingfu and Sun, Zhenan and Shan, Ying},
journal={arXiv preprint arXiv:2505.05422},
year={2025}
}
```
|
forkkyty/blockassist-bc-savage_stinging_opossum_1755798327
|
forkkyty
| 2025-08-21T17:45:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage stinging opossum",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:45:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage stinging opossum
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755796789
|
quantumxnode
| 2025-08-21T17:45:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:45:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1755796755
|
unitova
| 2025-08-21T17:45:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:45:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
silverside/PNSNC_bot_1
|
silverside
| 2025-08-21T17:44:36Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-21T16:18:17Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: PNSNC_BOT
---
# Pnsnc_Bot_1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `PNSNC_BOT` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "PNSNC_BOT",
"lora_weights": "https://huggingface.co/silverside/PNSNC_bot_1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('silverside/PNSNC_bot_1', weight_name='lora.safetensors')
image = pipeline('PNSNC_BOT').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2000
- Learning rate: 0.0001
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/silverside/PNSNC_bot_1/discussions) to add images that show off what you've made with this LoRA.
|
Team-Atom/smolvla_record_pp_ryb_t_96_100000
|
Team-Atom
| 2025-08-21T17:44:34Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Team-Atom/PiPl_RYB_test",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-21T17:44:15Z |
---
base_model: lerobot/smolvla_base
datasets: Team-Atom/PiPl_RYB_test
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- lerobot
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
kambingijo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_scampering_camel
|
kambingijo
| 2025-08-21T17:44:02Z | 158 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am coiled_scampering_camel",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T00:59:31Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am coiled_scampering_camel
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
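Until the authors add official instructions, the following is an unofficial sketch using the standard 🤗 transformers chat API, with the model id taken from this repo:

```python
# Unofficial sketch: standard transformers usage for this Qwen2.5-based model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kambingijo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_scampering_camel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```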
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChenWu98/numina_qwen_2.5_sft_cluster_soft_split_1_0.25
|
ChenWu98
| 2025-08-21T17:41:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T17:40:43Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_cluster_soft_split_1_0.25
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_sft_cluster_soft_split_1_0.25
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_cluster_soft_split_1_0.25", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/bqu43lw8)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
trungquanjqk/blockassist-bc-flightless_unseen_parrot_1755797206
|
trungquanjqk
| 2025-08-21T17:41:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flightless unseen parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:41:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless unseen parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Neural-Hacker/pygpt2
|
Neural-Hacker
| 2025-08-21T17:41:39Z | 0 | 1 | null |
[
"question-answering",
"en",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:mit",
"region:us"
] |
question-answering
| 2025-08-12T04:30:11Z |
---
license: mit
language:
- en
base_model:
- distilgpt2
pipeline_tag: question-answering
---
This fine-tuned model is designed to answer basic theoretical questions about Python programming.
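A minimal usage sketch (unofficial; assumes the model works with the standard transformers text-generation pipeline):

```python
# Unofficial sketch: querying the model via the transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="Neural-Hacker/pygpt2")
result = generator("Q: What is a list comprehension in Python?\nA:", max_new_tokens=64)
print(result[0]["generated_text"])
```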
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755797955
|
canoplos112
| 2025-08-21T17:41:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:39:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
video-viral-de-wendy-guevara-Clips/Ver.Filtran.video.intimo.de.Wendy.Guevara.en.twitter
|
video-viral-de-wendy-guevara-Clips
| 2025-08-21T17:41:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-21T17:40:50Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/3ckkv2u7?viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755796224
|
katanyasekolah
| 2025-08-21T17:39:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:39:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
videosoftware/snehamalapaka-lora
|
videosoftware
| 2025-08-21T17:38:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-21T17:36:16Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: undefined
instance_prompt: snehamalapaka
license: other
---
# snehamalapaka lora
<Gallery />
## Model description
Custom LoRA trained on 4 personal videos of Sneha Malapaka for Wan 2.2 video generation.
## Trigger words
You should use `snehamalapaka` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/videosoftware/snehamalapaka-lora/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/wan-trainer](https://fal.ai/models/fal-ai/wan-trainer).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755796244
|
thanobidex
| 2025-08-21T17:37:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:37:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755796135
|
vwzyrraz7l
| 2025-08-21T17:36:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:36:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jahyungu/Llama-3.2-1B-Instruct_openbookqa
|
jahyungu
| 2025-08-21T17:35:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T16:23:25Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-Instruct_openbookqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct_openbookqa
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1755797661
|
fatepurriyaz
| 2025-08-21T17:34:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:34:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755797536
|
canoplos112
| 2025-08-21T17:34:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:32:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Hamza200420563/gemma-2b-bnb-4bit-StockAnalysis
|
Hamza200420563
| 2025-08-21T17:33:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T10:00:23Z |
---
base_model: unsloth/gemma-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Hamza200420563
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
urbainze/llama-3-8b-Instruct-frss
|
urbainze
| 2025-08-21T17:30:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T17:24:36Z |
---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** urbainze
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755795719
|
coelacanthxyz
| 2025-08-21T17:30:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:30:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755797251
|
ggozzy
| 2025-08-21T17:28:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:28:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BootesVoid/cmellwsdi03rttlqblhmo4fwj_cmelmezhw03t3tlqby4lqxyxe
|
BootesVoid
| 2025-08-21T17:28:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-21T17:28:33Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY1
---
# Cmellwsdi03Rttlqblhmo4Fwj_Cmelmezhw03T3Tlqby4Lqxyxe
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SEXY1",
"lora_weights": "https://huggingface.co/BootesVoid/cmellwsdi03rttlqblhmo4fwj_cmelmezhw03t3tlqby4lqxyxe/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmellwsdi03rttlqblhmo4fwj_cmelmezhw03t3tlqby4lqxyxe', weight_name='lora.safetensors')
image = pipeline('SEXY1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmellwsdi03rttlqblhmo4fwj_cmelmezhw03t3tlqby4lqxyxe/discussions) to add images that show off what you've made with this LoRA.
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755795856
|
lisaozill03
| 2025-08-21T17:28:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:28:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755795765
|
manusiaperahu2012
| 2025-08-21T17:28:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:28:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Sanskrit-Translate-V1.0-GGUF
|
mradermacher
| 2025-08-21T17:27:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:VinitT/Sanskrit-Translate-V1.0",
"base_model:quantized:VinitT/Sanskrit-Translate-V1.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-21T17:24:03Z |
---
base_model: VinitT/Sanskrit-Translate-V1.0
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/VinitT/Sanskrit-Translate-V1.0
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Sanskrit-Translate-V1.0-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
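For example, one way to run a quant from this repo locally is with llama-cpp-python (an unofficial sketch; the file name matches the Q4_K_M entry in the table below):

```python
# Unofficial sketch: load a GGUF quant with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="Sanskrit-Translate-V1.0.Q4_K_M.gguf")  # downloaded from this repo
out = llm("Translate to Sanskrit: The sun rises in the east.", max_tokens=64)
print(out["choices"][0]["text"])
```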
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Sanskrit-Translate-V1.0-GGUF/resolve/main/Sanskrit-Translate-V1.0.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
There is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better).
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AAAAnsah/Qwen25-0.5B-rfa-vax-lmc-try-7
|
AAAAnsah
| 2025-08-21T17:27:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"region:us"
] |
text-generation
| 2025-08-21T17:27:47Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
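Until the authors fill this in, here is an unofficial sketch for loading the adapter with PEFT, with the base model taken from the metadata above:

```python
# Unofficial sketch: apply this LoRA adapter to its base model via PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "AAAAnsah/Qwen25-0.5B-rfa-vax-lmc-try-7")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
```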
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755797230
|
Dejiat
| 2025-08-21T17:27:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:27:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1755797223
|
fatepurriyaz
| 2025-08-21T17:27:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:27:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755795585
|
koloni
| 2025-08-21T17:26:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:26:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
golopper/blockassist-bc-pensive_twitchy_ape_1755797144
|
golopper
| 2025-08-21T17:25:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive twitchy ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:25:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive twitchy ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VER-ORIGINAL-VIDEO-DE-MILICA-Y-ANGEL-DAVID/VER-Milica.y.Angel.David.Video.Debut.Erome.Video.de.Milica.y.Angel.David.ybanez.Jugar
|
VER-ORIGINAL-VIDEO-DE-MILICA-Y-ANGEL-DAVID
| 2025-08-21T17:25:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-21T17:20:55Z |
<animated-image data-catalyst=""><a href="https://newmovietv.online/leaked-video/?leaked-videos/" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Does Milica keep her promise to Ángel David? Was there a promised debut and a viral video?
Does Milica keep her promise to Ángel David? The promise made before Supernova went viral, and everyone is asking whether the debut actually happened. Find out here.
Both Milica and Ángel David have built expectations around whether there will really be a "debut". - Photo: courtesy.
The question of whether Milica keeps her promise to Ángel David to let him "debut" became a trending topic online and on social media, driving searches such as "video de Milica y Ángel David", "Milica y Ángel Avid", "Debut de Ángel David y Milica", "¿Ángel David debutó con Milica?", "Ángel David y Milica Telegram", and "video viral de Milica y Ángel David debut", among other related queries.
The story went viral after their appearance at the Supernova event, where a seemingly casual promise turned into one of the most shared moments of the night.
What was Milica's promise to Ángel David?
Milica, an Argentine streamer and influencer, posted a TikTok announcing that if she won her fight at Supernova, she would do whatever the most-liked comment suggested. Ángel Avid (also referred to as Ángel David on social media) left a comment asking her to let him "debut". That comment received more than three million likes.
After winning her fight against Mercedes Roa, Milica delivered: she brought him into the ring, hugged him, and said plainly: "Tomorrow we're going to have to talk about that debut, I think."
How did Milica and Ángel David meet?
According to several outlets, Milica and Ángel David met "hours before" the event, held on August 17, 2025, at the Palacio de los Deportes in Mexico City.
It was a brief encounter, but enough that, after the win, Milica invited him up to the ring. The fan thus became part of the show, and that instant became a symbol of how unpredictable viral content can be.
Was there a debut of Ángel David with Milica after the fight?
After the bout, which Milica won by unanimous decision, the moment with Ángel David became the most viral of the Supernova: Orígenes 2025 event. Milenio and other outlets highlighted him as "the young man who will 'debut' with Milica".
His presence in the ring, along with the post-victory hug, drew laughter, praise, and thousands of comments from attendees and online. Since then, both Milica and Ángel David have stoked expectations about whether there will really be a formal "debut" or whether it was simply a symbolic, even humorous, gesture.
Milica on Wikipedia
Milica, whose real name is Micaela Ibáñez, is an Argentine influencer and streamer born in Buenos Aires, roughly 22 years old (born around 2001). She has a strong presence on Twitch (more than 180,000 followers) and on TikTok, where she shares lifestyle, workout, and entertainment content.
Her participation in Supernova marked a turning point: she not only won in the ring but also made news for that interaction with a fan, which ended up going viral.
Who is Ángel David, Milica's fan?
Ángel David, also known as Ángel Avid, is a young fan of Milica who became a public figure overnight. His TikTok comment asking to be "debuted" received more than three million likes, catapulting him to instant fame. Before the event he was just another follower; now he appears in the media as "the real winner" of the night.
Did Ángel David debut with Milica?
Does Milica keep her promise to Ángel David? She brought him into the ring, made him part of the moment, and publicly confirmed they would talk about that "debut". Whether it will be a formal debut in the strict sense, such as a date, a collaboration, or a special event, is still unclear.
So far, what happened was a symbolic, playful, and highly viral gesture. So although she technically kept the promise, it remains to be seen what exactly that "debut" will mean going forward.
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755795474
|
indoempatnol
| 2025-08-21T17:24:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:24:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755796981
|
ggozzy
| 2025-08-21T17:24:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:24:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1755796973
|
fatepurriyaz
| 2025-08-21T17:23:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:23:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jgchaparro/language_garden-tsd-stt-merged-small
|
jgchaparro
| 2025-08-21T17:22:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/whisper-small",
"base_model:finetune:unsloth/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-21T17:21:36Z |
---
base_model: unsloth/whisper-small
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** jgchaparro
- **License:** apache-2.0
- **Finetuned from model:** unsloth/whisper-small
This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
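An unofficial inference sketch with the transformers ASR pipeline (the audio file name is a placeholder):

```python
# Unofficial sketch: transcribe audio with this fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jgchaparro/language_garden-tsd-stt-merged-small")
print(asr("sample.wav")["text"])
```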
|
Bila333/salon
|
Bila333
| 2025-08-21T17:20:12Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-21T16:44:28Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755796683
|
canoplos112
| 2025-08-21T17:20:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:18:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755796712
|
ggozzy
| 2025-08-21T17:19:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:19:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
esi777/blockassist-bc-camouflaged_trotting_eel_1755796649
|
esi777
| 2025-08-21T17:17:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:17:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1755796569
|
fatepurriyaz
| 2025-08-21T17:16:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:16:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Qwen3-4B-P3-SFT-2-GGUF
|
mradermacher
| 2025-08-21T17:15:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"axolotl",
"generated_from_trainer",
"en",
"base_model:AiForgeMaster/Qwen3-4B-P3-SFT-2",
"base_model:quantized:AiForgeMaster/Qwen3-4B-P3-SFT-2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-21T16:49:16Z |
---
base_model: AiForgeMaster/Qwen3-4B-P3-SFT-2
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- axolotl
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/AiForgeMaster/Qwen3-4B-P3-SFT-2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-4B-P3-SFT-2-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
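As an unofficial example, a quant from this repo can also be fetched and loaded programmatically (the file name matches the Q4_K_M entry in the table below):

```python
# Unofficial sketch: download a quant with huggingface_hub and run it via llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download("mradermacher/Qwen3-4B-P3-SFT-2-GGUF", "Qwen3-4B-P3-SFT-2.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```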
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-P3-SFT-2-GGUF/resolve/main/Qwen3-4B-P3-SFT-2.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
zonglin1104/svla_so100_pickplace_tape_to_basket_2
|
zonglin1104
| 2025-08-21T17:15:02Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:zonglin1104/lekiwi_pickup_tape_to_basket_2",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-21T16:27:25Z |
---
base_model: lerobot/smolvla_base
datasets: zonglin1104/lekiwi_pickup_tape_to_basket_2
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
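For programmatic inference, a minimal sketch follows; note that policy module paths have moved between lerobot releases, so treat the import below as an assumption to adapt to your installed version:
```python
# Minimal inference sketch; the import path is an assumption (older releases
# use lerobot.common.policies.smolvla.modeling_smolvla instead).
import torch
from lerobot.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("zonglin1104/svla_so100_pickplace_tape_to_basket_2")
policy.eval()

# `observation` must be a dict of tensors (camera frames, robot state) with the
# same keys and shapes that the training dataset produces.
# with torch.no_grad():
#     action = policy.select_action(observation)
```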
---
## Model Details
- **License:** apache-2.0
|
mradermacher/Gemma3-open-code-GGUF
|
mradermacher
| 2025-08-21T17:14:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"trl",
"en",
"base_model:Frax01/Gemma3-open-code",
"base_model:quantized:Frax01/Gemma3-open-code",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T17:04:27Z |
---
base_model: Frax01/Gemma3-open-code
language:
- en
library_name: transformers
model_name: Gemma3-open-code
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- sft
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Frax01/Gemma3-open-code
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Gemma3-open-code-GGUF).***
Weighted/imatrix quants are not currently available from me. If they have not appeared within a week or so of the static ones, I probably have not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.Q3_K_S.gguf) | Q3_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-open-code-GGUF/resolve/main/Gemma3-open-code.f16.gguf) | f16 | 2.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rayhaan-beeharry/gemma-3-1b-it-Q4_K_M-GGUF
|
rayhaan-beeharry
| 2025-08-21T17:14:43Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:google/gemma-3-1b-it",
"base_model:quantized:google/gemma-3-1b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-21T17:14:38Z |
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you're required to review and
agree to Google's usage license. To do this, please ensure you're logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-1b-it
tags:
- llama-cpp
- gguf-my-repo
---
# rayhaan-beeharry/gemma-3-1b-it-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-3-1b-it`](https://huggingface.co/google/gemma-3-1b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-3-1b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rayhaan-beeharry/gemma-3-1b-it-Q4_K_M-GGUF --hf-file gemma-3-1b-it-q4_k_m.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rayhaan-beeharry/gemma-3-1b-it-Q4_K_M-GGUF --hf-file gemma-3-1b-it-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rayhaan-beeharry/gemma-3-1b-it-Q4_K_M-GGUF --hf-file gemma-3-1b-it-q4_k_m.gguf -p "The meaning of life and the universe is"
```
or
```
./llama-server --hf-repo rayhaan-beeharry/gemma-3-1b-it-Q4_K_M-GGUF --hf-file gemma-3-1b-it-q4_k_m.gguf -c 2048
```
|
LINK-Uppal-Farm-Girl-Viral-Video-Original/WATCH.Uppal.Farm.Girl.Viral.Video.Official.Tutorial
|
LINK-Uppal-Farm-Girl-Viral-Video-Original
| 2025-08-21T17:13:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-21T17:12:44Z |
|
tgrhn/whisper-large-v3-turbo_finetuned-6
|
tgrhn
| 2025-08-21T17:12:57Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T17:12:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BjarneNPO/finetune_21_08_2025_18_55_50
|
BjarneNPO
| 2025-08-21T17:12:48Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gte",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:19964",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-m-v2.0",
"base_model:finetune:Snowflake/snowflake-arctic-embed-m-v2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-21T17:09:17Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:19964
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-m-v2.0
widget:
- source_sentence: 'Kollegin hat Probleme mit dem Login zu '
sentences:
- Alle genannten Kinder gab es in kitaplus. Bei einem musste nur eine neue BI angelegt
werden, bei den anderen muss der Vertrag in einer anderen Kita rückgängig gemacht
werden, damit es in kitaplus in dieser Einrichtung aus der Liste der Absagen genommen
werden kann.
- Der Bereich ist aktuell noch nicht sichtbar.
- muss mit dem Rentamt geklärt werden
- source_sentence: Benutzer möchte einen Kollegen nur für die Dokumentenbibliothek
anlegen.
sentences:
- Rücksprache mit Entwickler.
- Sie muss den Regler auf Anzahl stellen
- Zusammen die Rolle gewählt und dort dann in den individuellen Rechten alles auf
lesend bzw. ausblenden gestellt, außer die Bibliothek.
- source_sentence: Ist es richtig so, dass Mitarbeiter, wenn sie nach einer gewissen
Zeit wieder in die Einrichtung kommen, erneut angelegt werden müssen?
sentences:
- Userin an den Träger verwiesen, dieser kann bei ihr ein neues Passwort setzen.
- Ja, das ist korrekt so.
- Userin muss erst rechts über das 3-Punkte-menü die "Anmeldedaten zusammenführen".
Danach muss man in den angelegten BI die Gruppenform des Anmeldeportals angeben.
- source_sentence: Userin kann die Öffnungszeiten der Einrichtung nicht bearbeiten.
sentences:
- informiert, dass es keinen Testzugang gibt, aber Handbücher und Hilfen in zur
Verfügung stehen, wenn die Schnittstelle eingerichtet wurde.
- Bereits bekannt, die Kollegen sind schon dabei den Fehler zu beheben.
- Userin darf dies mit der Rolle nicht.
- source_sentence: fragt wie der Stand zu dem aktuellen Problem ist
sentences:
- Userin muss sich an die Bistums IT wenden.
- In Klärung mit der Kollegin - Das Problem liegt leider an deren Betreiber. Die
sind aber informiert und arbeiten bereits daran
- findet diese in der Übersicht der Gruppen.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Snowflake/snowflake arctic embed m v2.0
type: Snowflake/snowflake-arctic-embed-m-v2.0
metrics:
- type: cosine_accuracy@1
value: 0.1897810218978102
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7153284671532847
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8029197080291971
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8540145985401459
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.1897810218978102
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.44282238442822386
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.4656934306569343
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.44598540145985405
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.008333162948877848
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.09894770560292243
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.16592225698065116
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.23699966604646722
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.45898091811493363
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4676572818908586
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2943817650574986
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) on the train dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) <!-- at revision 95c2741480856aa9666782eb4afe11959938017f -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- train
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'GteModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("BjarneNPO/finetune_21_08_2025_18_55_50")
# Run inference
queries = [
"fragt wie der Stand zu dem aktuellen Problem ist",
]
documents = [
'In Klärung mit der Kollegin - Das Problem liegt leider an deren Betreiber. Die sind aber informiert und arbeiten bereits daran',
'findet diese in der Übersicht der Gruppen.',
'Userin muss sich an die Bistums IT wenden.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.2540, 0.0537, 0.0780]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `Snowflake/snowflake-arctic-embed-m-v2.0`
* Evaluated with <code>scripts.InformationRetrievalEvaluatorCustom.InformationRetrievalEvaluatorCustom</code> with these parameters:
```json
{
"query_prompt_name": "query",
"corpus_prompt_name": "query"
}
```
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.1898 |
| cosine_accuracy@3 | 0.7153 |
| cosine_accuracy@5 | 0.8029 |
| cosine_accuracy@10 | 0.854 |
| cosine_precision@1 | 0.1898 |
| cosine_precision@3 | 0.4428 |
| cosine_precision@5 | 0.4657 |
| cosine_precision@10 | 0.446 |
| cosine_recall@1 | 0.0083 |
| cosine_recall@3 | 0.0989 |
| cosine_recall@5 | 0.1659 |
| cosine_recall@10 | 0.237 |
| **cosine_ndcg@10** | **0.459** |
| cosine_mrr@10 | 0.4677 |
| cosine_map@100 | 0.2944 |
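As a reference point, here is a minimal sketch of running the same kind of evaluation with the stock `InformationRetrievalEvaluator` (the custom evaluator above shares its interface; the queries, corpus, and relevance labels below are toy placeholders, not the real evaluation data):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("BjarneNPO/finetune_21_08_2025_18_55_50")

# Toy placeholders standing in for the held-out query/answer pairs.
queries = {"q1": "fragt wie der Stand zu dem aktuellen Problem ist"}
corpus = {
    "d1": "In Klärung mit der Kollegin - Das Problem liegt leider an deren Betreiber.",
    "d2": "findet diese in der Übersicht der Gruppen.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="support-tickets")
results = evaluator(model)  # dict of metrics, e.g. cosine_ndcg@10, cosine_mrr@10, cosine_map@100
print(results)
```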
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### train
* Dataset: train
* Size: 19,964 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 27.77 tokens</li><li>max: 615 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 22.87 tokens</li><li>max: 151 tokens</li></ul> |
* Samples:
| query | answer |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------|
| <code>Wie kann man die Jahresurlaubsübersicht exportieren?</code> | <code>über das 3 Punkte Menü rechts oben. Mitarbeiter auswählen und exportieren</code> |
| <code>1. Vertragsabschlüsse werden nicht übertragen
<br>2. Kinder kommen nicht von nach
<br>3. Absage kann bei Portalstatus nicht erstellt werden.</code> | <code>Ticket
<br>Userin gebeten sich an den Support zu wenden, da der Fehler liegt.</code> |
| <code>Wird im Anmeldeportal nicht gefunden.</code> | <code>Die Schnittstelle war noch nicht aktiviert und Profil ebenfalls nicht.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 64
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
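Putting the loss and the non-default hyperparameters above together, a minimal training sketch with the current Sentence Transformers trainer API might look like this (a toy one-pair dataset stands in for the real 19,964-pair train split):
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v2.0", trust_remote_code=True)

# Toy stand-in; the real dataset has 19,964 (query, answer) pairs.
train_dataset = Dataset.from_dict({
    "query": ["Wie kann man die Jahresurlaubsübersicht exportieren?"],
    "answer": ["über das 3 Punkte Menü rechts oben. Mitarbeiter auswählen und exportieren"],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)  # in-batch negatives, cosine similarity

args = SentenceTransformerTrainingArguments(
    output_dir="outputs/finetune",
    num_train_epochs=10,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```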
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Snowflake/snowflake-arctic-embed-m-v2.0_cosine_ndcg@10 |
|:-------:|:-------:|:-------------:|:------------------------------------------------------:|
| 0.1282 | 10 | 2.8279 | - |
| 0.2564 | 20 | 2.7011 | - |
| 0.3846 | 30 | 2.6182 | - |
| 0.5128 | 40 | 2.3893 | - |
| 0.6410 | 50 | 2.2499 | - |
| 0.7692 | 60 | 2.1048 | - |
| 0.8974 | 70 | 1.987 | - |
| 1.0 | 78 | - | 0.5043 |
| 1.0256 | 80 | 1.7766 | - |
| 1.1538 | 90 | 1.7516 | - |
| 1.2821 | 100 | 1.6332 | - |
| 1.4103 | 110 | 1.5975 | - |
| 1.5385 | 120 | 1.5437 | - |
| 1.6667 | 130 | 1.4739 | - |
| 1.7949 | 140 | 1.3988 | - |
| 1.9231 | 150 | 1.3845 | - |
| **2.0** | **156** | **-** | **0.4853** |
| 2.0513 | 160 | 1.2183 | - |
| 2.1795 | 170 | 1.2841 | - |
| 2.3077 | 180 | 1.2558 | - |
| 2.4359 | 190 | 1.2305 | - |
| 2.5641 | 200 | 1.2234 | - |
| 2.6923 | 210 | 1.1089 | - |
| 2.8205 | 220 | 1.1591 | - |
| 2.9487 | 230 | 1.0641 | - |
| 3.0 | 234 | - | 0.4735 |
| 3.0769 | 240 | 1.0085 | - |
| 3.2051 | 250 | 1.0507 | - |
| 3.3333 | 260 | 1.0183 | - |
| 3.4615 | 270 | 1.0208 | - |
| 3.5897 | 280 | 0.9587 | - |
| 3.7179 | 290 | 0.9273 | - |
| 3.8462 | 300 | 0.9171 | - |
| 3.9744 | 310 | 0.9076 | - |
| 4.0 | 312 | - | 0.4704 |
| 4.1026 | 320 | 0.8029 | - |
| 4.2308 | 330 | 0.8903 | - |
| 4.3590 | 340 | 0.8794 | - |
| 4.4872 | 350 | 0.851 | - |
| 4.6154 | 360 | 0.823 | - |
| 4.7436 | 370 | 0.7819 | - |
| 4.8718 | 380 | 0.7974 | - |
| 5.0 | 390 | 0.7552 | 0.4693 |
| 5.1282 | 400 | 0.7336 | - |
| 5.2564 | 410 | 0.7652 | - |
| 5.3846 | 420 | 0.7597 | - |
| 5.5128 | 430 | 0.7481 | - |
| 5.6410 | 440 | 0.6982 | - |
| 5.7692 | 450 | 0.6817 | - |
| 5.8974 | 460 | 0.7136 | - |
| 6.0 | 468 | - | 0.4652 |
| 6.0256 | 470 | 0.6233 | - |
| 6.1538 | 480 | 0.6739 | - |
| 6.2821 | 490 | 0.6646 | - |
| 6.4103 | 500 | 0.6614 | - |
| 6.5385 | 510 | 0.6699 | - |
| 6.6667 | 520 | 0.6291 | - |
| 6.7949 | 530 | 0.6344 | - |
| 6.9231 | 540 | 0.6459 | - |
| 7.0 | 546 | - | 0.4635 |
| 7.0513 | 550 | 0.5652 | - |
| 7.1795 | 560 | 0.6227 | - |
| 7.3077 | 570 | 0.6308 | - |
| 7.4359 | 580 | 0.6253 | - |
| 7.5641 | 590 | 0.6315 | - |
| 7.6923 | 600 | 0.5571 | - |
| 7.8205 | 610 | 0.6234 | - |
| 7.9487 | 620 | 0.5742 | - |
| 8.0 | 624 | - | 0.4611 |
| 8.0769 | 630 | 0.5583 | - |
| 8.2051 | 640 | 0.5817 | - |
| 8.3333 | 650 | 0.5913 | - |
| 8.4615 | 660 | 0.6025 | - |
| 8.5897 | 670 | 0.5726 | - |
| 8.7179 | 680 | 0.5492 | - |
| 8.8462 | 690 | 0.5907 | - |
| 8.9744 | 700 | 0.5756 | - |
| 9.0 | 702 | - | 0.4606 |
| 9.1026 | 710 | 0.5134 | - |
| 9.2308 | 720 | 0.5861 | - |
| 9.3590 | 730 | 0.6 | - |
| 9.4872 | 740 | 0.5839 | - |
| 9.6154 | 750 | 0.5688 | - |
| 9.7436 | 760 | 0.5443 | - |
| 9.8718 | 770 | 0.5687 | - |
| 10.0 | 780 | 0.5608 | 0.4590 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu129
- Accelerate: 1.10.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755796174
|
ggozzy
| 2025-08-21T17:10:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:10:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/zara-270m-combined-GGUF
|
mradermacher
| 2025-08-21T17:08:47Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"trl",
"en",
"base_model:darkreapyre/zara-270m-combined",
"base_model:quantized:darkreapyre/zara-270m-combined",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-21T17:03:24Z |
---
base_model: darkreapyre/zara-270m-combined
language:
- en
library_name: transformers
model_name: gemma-3-270m-zara
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- sft
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/darkreapyre/zara-270m-combined
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#zara-270m-combined-GGUF).***
Weighted/imatrix quants are not currently available from me. If they have not appeared within a week or so of the static ones, I probably have not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/zara-270m-combined-GGUF/resolve/main/zara-270m-combined.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755795976
|
Dejiat
| 2025-08-21T17:06:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:06:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hoanggnam197/blockassist-bc-flightless_unseen_parrot_1755795055
|
hoanggnam197
| 2025-08-21T17:06:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flightless unseen parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:06:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless unseen parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mk499/MyGemmaNPC
|
mk499
| 2025-08-21T17:06:11Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T16:49:17Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mk499/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755794423
|
sampingkaca72
| 2025-08-21T17:05:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:05:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
orange67/gpt-oss-1
|
orange67
| 2025-08-21T17:04:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gpt-oss-120b-unsloth-bnb-4bit",
"base_model:adapter:unsloth/gpt-oss-120b-unsloth-bnb-4bit",
"region:us"
] | null | 2025-08-21T17:04:10Z |
---
base_model: unsloth/gpt-oss-120b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
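Until the authors fill this in, here is a minimal loading sketch, assuming (from the metadata above) that this repo holds a PEFT adapter for `unsloth/gpt-oss-120b-unsloth-bnb-4bit`; note that the 120B base requires substantial GPU memory even in 4-bit:
```python
# Minimal sketch, assuming this repo is a PEFT adapter on the 4-bit Unsloth base.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gpt-oss-120b-unsloth-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "orange67/gpt-oss-1")
```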
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
forkkyty/blockassist-bc-tropical_barky_camel_1755795868
|
forkkyty
| 2025-08-21T17:04:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tropical barky camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:04:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tropical barky camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755794108
|
kojeklollipop
| 2025-08-21T17:03:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:03:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755794128
|
thanobidex
| 2025-08-21T17:03:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T17:03:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
drewskidang/federal_bert_2
|
drewskidang
| 2025-08-21T17:00:34Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"feature-extraction",
"sentence-transformers",
"mteb",
"embedding",
"transformers.js",
"text-embeddings-inference",
"sentence-similarity",
"en",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-21T16:40:06Z |
---
license: apache-2.0
language:
- en
base_model:
- answerdotai/ModernBERT-base
base_model_relation: finetune
pipeline_tag: sentence-similarity
library_name: transformers
tags:
- sentence-transformers
- mteb
- embedding
- transformers.js
- text-embeddings-inference
---
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755794016
|
lisaozill03
| 2025-08-21T16:58:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:58:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LogicBombaklot/Llama-3_3-Nemotron-Super-49B-v1_5-mlx-8Bit
|
LogicBombaklot
| 2025-08-21T16:58:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"nemotron-nas",
"text-generation",
"nvidia",
"llama-3",
"pytorch",
"mlx",
"mlx-my-repo",
"conversational",
"custom_code",
"en",
"base_model:nvidia/Llama-3_3-Nemotron-Super-49B-v1_5",
"base_model:quantized:nvidia/Llama-3_3-Nemotron-Super-49B-v1_5",
"license:other",
"autotrain_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-21T16:55:01Z |
---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
- mlx
- mlx-my-repo
base_model: nvidia/Llama-3_3-Nemotron-Super-49B-v1_5
---
# LogicBombaklot/Llama-3_3-Nemotron-Super-49B-v1_5-mlx-8Bit
This model, [LogicBombaklot/Llama-3_3-Nemotron-Super-49B-v1_5-mlx-8Bit](https://huggingface.co/LogicBombaklot/Llama-3_3-Nemotron-Super-49B-v1_5-mlx-8Bit), was converted to MLX format from [nvidia/Llama-3_3-Nemotron-Super-49B-v1_5](https://huggingface.co/nvidia/Llama-3_3-Nemotron-Super-49B-v1_5) using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("LogicBombaklot/Llama-3_3-Nemotron-Super-49B-v1_5-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755793843
|
ihsanridzi
| 2025-08-21T16:58:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:58:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755795382
|
Dejiat
| 2025-08-21T16:57:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:56:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755793714
|
hakimjustbao
| 2025-08-21T16:56:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:56:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AITECHINDIA/llama3.1-8b-football-instruct-lora
|
AITECHINDIA
| 2025-08-21T16:52:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T15:48:37Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AITECHINDIA
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
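The repo name suggests LoRA adapter weights; if so, a minimal loading sketch with Unsloth looks like the following (an assumption on my part; if the weights were merged before upload, load the repo directly with `transformers` instead):
```python
# Minimal sketch with Unsloth (matching how the model was trained); assumes the
# repo can be loaded directly, with any LoRA adapters resolved against the base.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AITECHINDIA/llama3.1-8b-football-instruct-lora",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```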
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755795097
|
ggozzy
| 2025-08-21T16:52:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:52:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755793492
|
indoempatnol
| 2025-08-21T16:52:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:52:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755794849
|
lqpl
| 2025-08-21T16:51:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:48:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755794828
|
ggozzy
| 2025-08-21T16:48:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:48:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BjarneNPO/finetune_21_08_2025_18_35_25
|
BjarneNPO
| 2025-08-21T16:44:28Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gte",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:19964",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-m-v2.0",
"base_model:finetune:Snowflake/snowflake-arctic-embed-m-v2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-21T16:40:56Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:19964
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-m-v2.0
widget:
- source_sentence: 'Kollegin hat Probleme mit dem Login zu '
sentences:
- Alle genannten Kinder gab es in kitaplus. Bei einem musste nur eine neue BI angelegt
werden, bei den anderen muss der Vertrag in einer anderen Kita rΓΌckgΓ€ngig gemacht
werden, damit es in kitaplus in dieser Einrichtung aus der Liste der Absagen genommen
werden kann.
- Der Bereich ist aktuell noch nicht sichtbar.
- muss mit dem Rentamt geklΓ€rt werden
- source_sentence: Benutzer mΓΆchte einen Kollegen nur fΓΌr die Dokumentenbibliothek
anlegen.
sentences:
- RΓΌcksprache mit Entwickler.
- Sie muss den Regler auf Anzahl stellen
- Zusammen die Rolle gewΓ€hlt und dort dann in den individuellen Rechten alles auf
lesend bzw. ausblenden gestellt, auΓer die Bibliothek.
- source_sentence: Ist es richtig so, dass Mitarbeiter, wenn sie nach einer gewissen
Zeit wieder in die Einrichtung kommen, erneut angelegt werden mΓΌssen?
sentences:
- Userin an den TrΓ€ger verwiesen, dieser kann bei ihr ein neues Passwort setzen.
- Ja, das ist korrekt so.
- Userin muss erst rechts ΓΌber das 3-Punkte-menΓΌ die "Anmeldedaten zusammenfΓΌhren".
Danach muss man in den angelegten BI die Gruppenform des Anmeldeportals angeben.
- source_sentence: Userin kann die Γffnungszeiten der Einrichtung nicht bearbeiten.
sentences:
- informiert, dass es keinen Testzugang gibt, aber HandbΓΌcher und Hilfen in zur
VerfΓΌgung stehen, wenn die Schnittstelle eingerichtet wurde.
- Bereits bekannt, die Kollegen sind schon dabei den Fehler zu beheben.
- Userin darf dies mit der Rolle nicht.
- source_sentence: fragt wie der Stand zu dem aktuellen Problem ist
sentences:
- Userin muss sich an die Bistums IT wenden.
- In KlΓ€rung mit der Kollegin - Das Problem liegt leider an deren Betreiber. Die
sind aber informiert und arbeiten bereits daran
- findet diese in der Γbersicht der Gruppen.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Snowflake/snowflake arctic embed m v2.0
type: Snowflake/snowflake-arctic-embed-m-v2.0
metrics:
- type: cosine_accuracy@1
value: 0.19708029197080293
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7226277372262774
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8029197080291971
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8759124087591241
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.19708029197080293
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.44525547445255476
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.46277372262773725
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.43576642335766425
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.008762531776700945
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.09805489105617915
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.1603290464604333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.23250747987759582
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4532269034566889
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.47734040088054697
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2936078777768552
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) on the train dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) <!-- at revision 95c2741480856aa9666782eb4afe11959938017f -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- train
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'GteModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the π€ Hub
model = SentenceTransformer("BjarneNPO/finetune_21_08_2025_18_35_25")
# Run inference
queries = [
"fragt wie der Stand zu dem aktuellen Problem ist",
]
documents = [
'In KlΓ€rung mit der Kollegin - Das Problem liegt leider an deren Betreiber. Die sind aber informiert und arbeiten bereits daran',
'findet diese in der Γbersicht der Gruppen.',
'Userin muss sich an die Bistums IT wenden.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.2744, 0.0387, 0.0701]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `Snowflake/snowflake-arctic-embed-m-v2.0`
* Evaluated with <code>scripts.InformationRetrievalEvaluatorCustom.InformationRetrievalEvaluatorCustom</code> with these parameters:
```json
{
"query_prompt_name": "query",
"corpus_prompt_name": "query"
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1971 |
| cosine_accuracy@3 | 0.7226 |
| cosine_accuracy@5 | 0.8029 |
| cosine_accuracy@10 | 0.8759 |
| cosine_precision@1 | 0.1971 |
| cosine_precision@3 | 0.4453 |
| cosine_precision@5 | 0.4628 |
| cosine_precision@10 | 0.4358 |
| cosine_recall@1 | 0.0088 |
| cosine_recall@3 | 0.0981 |
| cosine_recall@5 | 0.1603 |
| cosine_recall@10 | 0.2325 |
| **cosine_ndcg@10** | **0.4532** |
| cosine_mrr@10 | 0.4773 |
| cosine_map@100 | 0.2936 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### train
* Dataset: train
* Size: 19,964 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 27.77 tokens</li><li>max: 615 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 22.87 tokens</li><li>max: 151 tokens</li></ul> |
* Samples:
| query | answer |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------|
| <code>Wie kann man die JahresurlaubsΓΌbersicht exportieren?</code> | <code>ΓΌber das 3 Punkte MenΓΌ rechts oben. Mitarbeiter auswΓ€hlen und exportieren</code> |
| <code>1. VertragsabschlΓΌsse werden nicht ΓΌbertragen
<br>2. Kinder kommen nicht von nach
<br>3. Absage kann bei Portalstatus nicht erstellt werden.</code> | <code>Ticket
<br>Userin gebeten sich an den Support zu wenden, da der Fehler liegt.</code> |
| <code>Wird im Anmeldeportal nicht gefunden.</code> | <code>Die Schnittstelle war noch nicht aktiviert und Profil ebenfalls nicht.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
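The non-default hyperparameters above map directly onto the Sentence Transformers trainer API. Below is a minimal sketch of such a run, assuming a (query, answer) dataset shaped like the train split; the single pair shown is an illustrative sample from this card, not the real data loader.
```python
from datasets import Dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Toy stand-in for the private 19,964-pair train split (one sample from this card).
train_dataset = Dataset.from_dict({
    "query": ["Wie kann man die JahresurlaubsΓΌbersicht exportieren?"],
    "answer": ["ΓΌber das 3 Punkte MenΓΌ rechts oben. Mitarbeiter auswΓ€hlen und exportieren"],
})

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v2.0", trust_remote_code=True)
loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity is the default

args = SentenceTransformerTrainingArguments(
    output_dir="finetune_out",
    num_train_epochs=10,
    per_device_train_batch_size=64,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids in-batch false negatives for MNRL
)

SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss).train()
```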
### Training Logs
| Epoch | Step | Training Loss | Snowflake/snowflake-arctic-embed-m-v2.0_cosine_ndcg@10 |
|:-------:|:-------:|:-------------:|:------------------------------------------------------:|
| 0.1282 | 10 | 3.4817 | - |
| 0.2564 | 20 | 3.3293 | - |
| 0.3846 | 30 | 3.2454 | - |
| 0.5128 | 40 | 2.9853 | - |
| 0.6410 | 50 | 2.8363 | - |
| 0.7692 | 60 | 2.6833 | - |
| 0.8974 | 70 | 2.5117 | - |
| 1.0 | 78 | - | 0.5070 |
| 1.0256 | 80 | 2.297 | - |
| 1.1538 | 90 | 2.2586 | - |
| 1.2821 | 100 | 2.1379 | - |
| 1.4103 | 110 | 2.1199 | - |
| 1.5385 | 120 | 2.0054 | - |
| 1.6667 | 130 | 1.9546 | - |
| 1.7949 | 140 | 1.8525 | - |
| 1.9231 | 150 | 1.8471 | - |
| 2.0 | 156 | - | 0.4817 |
| 2.0513 | 160 | 1.6686 | - |
| 2.1795 | 170 | 1.7224 | - |
| 2.3077 | 180 | 1.7122 | - |
| 2.4359 | 190 | 1.6487 | - |
| 2.5641 | 200 | 1.631 | - |
| 2.6923 | 210 | 1.5296 | - |
| 2.8205 | 220 | 1.5704 | - |
| 2.9487 | 230 | 1.4634 | - |
| **3.0** | **234** | **-** | **0.4692** |
| 3.0769 | 240 | 1.3748 | - |
| 3.2051 | 250 | 1.4602 | - |
| 3.3333 | 260 | 1.4275 | - |
| 3.4615 | 270 | 1.4183 | - |
| 3.5897 | 280 | 1.3431 | - |
| 3.7179 | 290 | 1.3013 | - |
| 3.8462 | 300 | 1.3206 | - |
| 3.9744 | 310 | 1.2743 | - |
| 4.0 | 312 | - | 0.4699 |
| 4.1026 | 320 | 1.1575 | - |
| 4.2308 | 330 | 1.2629 | - |
| 4.3590 | 340 | 1.2729 | - |
| 4.4872 | 350 | 1.1957 | - |
| 4.6154 | 360 | 1.1674 | - |
| 4.7436 | 370 | 1.1349 | - |
| 4.8718 | 380 | 1.166 | - |
| 5.0 | 390 | 1.0891 | 0.4707 |
| 5.1282 | 400 | 1.0469 | - |
| 5.2564 | 410 | 1.124 | - |
| 5.3846 | 420 | 1.1325 | - |
| 5.5128 | 430 | 1.0691 | - |
| 5.6410 | 440 | 1.0255 | - |
| 5.7692 | 450 | 1.0164 | - |
| 5.8974 | 460 | 1.0451 | - |
| 6.0 | 468 | - | 0.4578 |
| 6.0256 | 470 | 0.9404 | - |
| 6.1538 | 480 | 1.0043 | - |
| 6.2821 | 490 | 0.9964 | - |
| 6.4103 | 500 | 1.013 | - |
| 6.5385 | 510 | 0.9772 | - |
| 6.6667 | 520 | 0.9544 | - |
| 6.7949 | 530 | 0.9659 | - |
| 6.9231 | 540 | 0.9629 | - |
| 7.0 | 546 | - | 0.4576 |
| 7.0513 | 550 | 0.8522 | - |
| 7.1795 | 560 | 0.9288 | - |
| 7.3077 | 570 | 0.9705 | - |
| 7.4359 | 580 | 0.9301 | - |
| 7.5641 | 590 | 0.9388 | - |
| 7.6923 | 600 | 0.8569 | - |
| 7.8205 | 610 | 0.9414 | - |
| 7.9487 | 620 | 0.8796 | - |
| 8.0 | 624 | - | 0.4542 |
| 8.0769 | 630 | 0.8504 | - |
| 8.2051 | 640 | 0.9054 | - |
| 8.3333 | 650 | 0.9035 | - |
| 8.4615 | 660 | 0.9167 | - |
| 8.5897 | 670 | 0.8546 | - |
| 8.7179 | 680 | 0.8508 | - |
| 8.8462 | 690 | 0.8945 | - |
| 8.9744 | 700 | 0.8676 | - |
| 9.0 | 702 | - | 0.4526 |
| 9.1026 | 710 | 0.7934 | - |
| 9.2308 | 720 | 0.889 | - |
| 9.3590 | 730 | 0.9205 | - |
| 9.4872 | 740 | 0.8947 | - |
| 9.6154 | 750 | 0.8679 | - |
| 9.7436 | 760 | 0.8545 | - |
| 9.8718 | 770 | 0.8878 | - |
| 10.0 | 780 | 0.8483 | 0.4532 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu129
- Accelerate: 1.10.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755794618
|
Dejiat
| 2025-08-21T16:44:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:44:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hartular/roLlama3-Instruct-Grammar-01FF
|
hartular
| 2025-08-21T16:43:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"base_model:OpenLLM-Ro/RoLlama3.1-8b-Instruct",
"base_model:finetune:OpenLLM-Ro/RoLlama3.1-8b-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T16:43:32Z |
---
base_model: OpenLLM-Ro/RoLlama3.1-8b-Instruct
library_name: transformers
model_name: roLlama3-Instruct-Grammar-01FF
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for roLlama3-Instruct-Grammar-01FF
This model is a fine-tuned version of [OpenLLM-Ro/RoLlama3.1-8b-Instruct](https://huggingface.co/OpenLLM-Ro/RoLlama3.1-8b-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hartular/roLlama3-Instruct-Grammar-01FF", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
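As a rough illustration, an SFT run with TRL's `SFTTrainer` could look as follows; the dataset below is a hypothetical placeholder, since the actual grammar-correction data is not published.
```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical placeholder rows; the real Romanian grammar dataset is not public.
train_dataset = Dataset.from_dict({
    "text": ["### Instruction: Correct the sentence ...\n### Response: ..."],
})

trainer = SFTTrainer(
    model="OpenLLM-Ro/RoLlama3.1-8b-Instruct",  # base model named on this card
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="roLlama3-Instruct-Grammar-01FF"),
)
trainer.train()  # training an 8B model requires a suitably large GPU
```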
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755794419
|
lqpl
| 2025-08-21T16:43:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:41:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755793049
|
quantumxnode
| 2025-08-21T16:43:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:43:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Avinash-panda/bert-sentiment-imdb
|
Avinash-panda
| 2025-08-21T16:41:26Z | 0 | 0 | null |
[
"safetensors",
"bert",
"region:us"
] | null | 2025-08-21T16:11:19Z |
# BERT for Sentiment Classification (IMDB Dataset)
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the IMDB movie reviews dataset for **binary sentiment classification** (positive / negative).
## Model Details
- **Base model:** `bert-base-uncased`
- **Fine-tuning task:** Sentiment Analysis
- **Dataset:** IMDB (binary labels: positive / negative)
- **Language(s):** English
- **Framework:** PyTorch + Hugging Face Transformers
## Training
- **Batch size:** 16
- **Epochs:** 2
- **Optimizer:** AdamW
- **Learning rate:** 5e-5
- **Max sequence length:** 256
- **Training time:** ~X minutes on Google Colab GPU
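As an illustration, the hyperparameters listed above map onto the Hugging Face `Trainer` API roughly as follows; this is a reconstruction from the card, not the exact training script.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tokenize IMDB to at most 256 tokens, matching the max sequence length above.
imdb = load_dataset("imdb").map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
)

args = TrainingArguments(
    output_dir="bert-sentiment-imdb",
    per_device_train_batch_size=16,
    num_train_epochs=2,
    learning_rate=5e-5,  # AdamW is the Trainer default optimizer
)
Trainer(
    model=model,
    args=args,
    train_dataset=imdb["train"],
    eval_dataset=imdb["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
).train()
```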
## Evaluation
- **Test Accuracy:** 83.6%
## Usage
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="Avinash-panda/bert-sentiment-imdb")
print(classifier("The movie was absolutely fantastic!"))
# [{'label': 'POSITIVE', 'score': 0.98}]
```
|
Columbidae/Ministral-8B-Instruct-2410
|
Columbidae
| 2025-08-21T16:41:24Z | 0 | 0 |
vllm
|
[
"vllm",
"safetensors",
"mistral",
"mistral-common",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"license:other",
"region:us"
] | null | 2025-08-21T16:40:34Z |
---
library_name: vllm
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_prompt: >-
# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose
that is not expressly authorized under this Agreement, You must request a
license from Mistral AI, which Mistral AI may grant to You in Mistral AI's
sole discretion. To discuss such a license, please contact Mistral AI via the
website contact form: https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use,
modification, or Distribution of any Mistral Model by You, regardless of the
source You obtained a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral
Model, or by creating, using or distributing a Derivative of the Mistral
Model, You agree to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement
on behalf of Your employer or another person or entity, You warrant and
represent that You have the authority to act and accept this Agreement on
their behalf. In such a case, the word "You" in this Agreement will refer to
Your employer or such other person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby
grants You a non-exclusive, royalty-free, worldwide, non-sublicensable,
non-transferable, limited license to use, copy, modify, and Distribute under
the conditions provided in Section 2.2 below, the Mistral Model and any
Derivatives made by or for Mistral AI and to create Derivatives of the Mistral
Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral
AI.** Subject to Section 3 below, You may Distribute copies of the Mistral
Model and/or Derivatives made by or for Mistral AI, under the following
conditions: You must make available a copy of this Agreement to third-party
recipients of the Mistral Models and/or Derivatives made by or for Mistral AI
you Distribute, it being specified that any rights to use the Mistral Models
and/or Derivatives made by or for Mistral AI shall be directly granted by
Mistral AI to said third-party recipients pursuant to the Mistral AI Research
License agreement executed between these parties; You must retain in all
copies of the Mistral Models the following attribution notice within a
"Notice" text file distributed as part of such copies: "Licensed by Mistral AI
under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3
below, You may Distribute any Derivatives made by or for You under additional
or different terms and conditions, provided that: In any event, the use and
modification of Mistral Model and/or Derivatives made by or for Mistral AI
shall remain governed by the terms and conditions of this Agreement; You
include in any such Derivatives made by or for You prominent notices stating
that You modified the concerned Mistral Model; and Any terms and conditions
You impose on any third-party recipients relating to Derivatives made by or
for You shall neither limit such third-party recipients' use of the Mistral
Model or any Derivatives made by or for Mistral AI in accordance with the
Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any
means, that the Derivatives made by or for You and/or any modified version of
the Mistral Model You Distribute under your name and responsibility is an
official product of Mistral AI or has been endorsed, approved or validated by
Mistral AI, unless You are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives
(whether or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement,
and in connection with the Mistral Models, You may not use any name or mark
owned by or associated with Mistral AI or any of its affiliates, except (i) as
required for reasonable and customary use in describing and Distributing the
Mistral Models and Derivatives made by or for Mistral AI and (ii) for
attribution purposes as required by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are
solely responsible for the Outputs You generate and their subsequent uses in
accordance with this Agreement. Any Outputs shall be subject to the
restrictions set out in Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any
Derivatives that You may create or that may be created for You shall be
subject to the restrictions set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable
law (such as deliberate and grossly negligent acts) or agreed to in writing,
shall Mistral AI be liable to You for damages, including any direct, indirect,
special, incidental, or consequential damages of any character arising as a
result of this Agreement or out of the use or inability to use the Mistral
Models and Derivatives (including but not limited to damages for loss of data,
loss of goodwill, loss of expected profit or savings, work stoppage, computer
failure or malfunction, or any damage caused by malware or security breaches),
even if Mistral AI has been advised of the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI
from and against any claims, damages, or losses arising out of or related to
Your use or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by
Mistral AI in writing, Mistral AI provides the Mistral Models and Derivatives
on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
express or implied, including, without limitation, any warranties or
conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. Mistral AI does not represent nor warrant that the Mistral
Models and Derivatives will be error-free, meet Your or any third party's
requirements, be secure or will allow You or any third party to achieve any
kind of result or generate any kind of content. You are solely responsible for
determining the appropriateness of using or Distributing the Mistral Models
and Derivatives and assume any risks associated with Your exercise of rights
under this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance
of this Agreement or access to the concerned Mistral Models or Derivatives and
will continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if
You are in breach of this Agreement. Upon termination of this Agreement, You
must cease to use all Mistral Models and Derivatives and shall permanently
delete any copy thereof. The following provisions, in their relevant parts,
will survive any termination or expiration of this Agreement, each for the
duration necessary to achieve its own intended purpose (e.g. the liability
provision will survive until the end of the applicable limitation
period):Sections 5 (Liability), 6(Warranty), 7 (Termination) and 8 (General
Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against
Us or any other entity (including a cross-claim or counterclaim in a lawsuit),
alleging that the Model or a Derivative, or any part thereof, infringe upon
intellectual property or other rights owned or licensable by You, then any
licenses granted to You under this Agreement will immediately terminate as of
the date such legal action or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of
France, without regard to choice of law principles, and the UN Convention on
Contracts for the International Sale of Goods does not apply to this
Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive
jurisdiction of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be
invalid, illegal or unenforceable, the remaining provisions shall be
unaffected thereby and remain valid as if such provision had not been set
forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the
access, use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including
but not limited to any customized or fine-tuned version thereof), (ii) work
based on the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means
supplying, providing or making available, by any means, a copy of the Mistral
Models and/or the Derivatives as the case may be, subject to Section 3 of this
Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French sociΓ©tΓ© par actions
simplifiΓ©e registered in the Paris commercial registry under the number 952
418 325, and having its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its
elements which include algorithms, software, instructed checkpoints,
parameters, source code (inference code, evaluation code and, if applicable,
fine-tuning code) and any other elements associated thereto made available by
Mistral AI under this Agreement, including, if any, the technical
documentation, manuals and instructions for the use and operation thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output
that is solely for (a) personal, scientific or academic research, and (b) for
non-profit and non-commercial purposes, and not directly or indirectly
connected to any commercial activities or business operations. For
illustration purposes, Research Purposes does not include (1) any usage of the
Mistral Model, Derivative or Output by individuals or contractors employed in
or engaged by companies in the context of (a) their daily tasks, or (b) any
activity (including but not limited to any testing or proof-of-concept) that
is intended to generate revenue, nor (2) any Distribution by a commercial
entity of the Mistral Model, Derivative or Output whether in return for
payment or free of charge, in any medium or form, including but not limited to
through a hosted or managed service (e.g. SaaS, cloud instances, etc.), or
behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models
or the Derivatives from a prompt (i.e., text instructions) provided by users.
For the avoidance of doubt, Outputs do not include any components of a Mistral
Models, such as any fine-tuned versions of the Mistral Models, the weights, or
parameters.
"You": means the individual or entity entering into this Agreement with
Mistral AI.
*Mistral AI processes your personal data below to provide the model and
enforce its license. If you are affiliated with a commercial entity, we may
also send you communications about our models. For more information on your
rights and data handling, please see our <a
href="https://mistral.ai/terms/">privacy policy</a>.*
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
I understand that if I am a commercial entity, I am not permitted to use or distribute the model internally or externally, or expose it in my own offerings without a commercial license: checkbox
I understand that if I upload the model, or any derivative version, on any platform, I must include the Mistral Research License: checkbox
I understand that for commercial use of the model, I can contact Mistral or use the Mistral AI API on la Plateforme or any of our cloud provider partners: checkbox
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Mistral Privacy Policy: checkbox
geo: ip_location
extra_gated_description: >-
Mistral AI processes your personal data below to provide the model and enforce
its license. If you are affiliated with a commercial entity, we may also send
you communications about our models. For more information on your rights and
data handling, please see our <a href="https://mistral.ai/terms/">privacy
policy</a>.
extra_gated_button_content: Submit
tags:
- mistral-common
---
# Model Card for Ministral-8B-Instruct-2410
We introduce two new state-of-the-art models for local intelligence, on-device computing, and at-the-edge use cases. We call them les Ministraux: Ministral 3B and Ministral 8B.
The Ministral-8B-Instruct-2410 Language Model is an instruct fine-tuned model significantly outperforming existing models of similar size, released under the Mistral Research License.
Both models outperform Mistral-7B; if you are interested in using Ministral-3B or Ministral-8B commercially, [reach out to us](https://mistral.ai/contact/).
For more details about les Ministraux please refer to our release [blog post](https://mistral.ai/news/ministraux).
## Ministral 8B Key features
- Released under the **Mistral Research License**, reach out to us for a commercial license
- Trained with a **128k context window** with **interleaved sliding-window attention**
- Trained on a large proportion of **multilingual and code data**
- Supports **function calling**
- Vocabulary size of **131k**, using the **V3-Tekken** tokenizer
### Basic Instruct Template (V3-Tekken)
```
<s>[INST]user message[/INST]assistant response</s>[INST]new user message[/INST]
```
*For more information about the tokenizer please refer to [mistral-common](https://github.com/mistralai/mistral-common)*
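As a quick check, mistral-common can render this template for you; a minimal sketch (assumes `mistral_common` is installed):
```py
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

# Render the V3-Tekken chat template for a single user turn.
tokenizer = MistralTokenizer.v3(is_tekken=True)
tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content="user message")])
)
print(tokenized.text)  # <s>[INST]user message[/INST]
```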
## Ministral 8B Architecture
| Feature | Value |
|:---------------------:|:--------------------:|
| **Architecture** | Dense Transformer |
| **Parameters** | 8,019,808,256 |
| **Layers** | 36 |
| **Heads** | 32 |
| **Dim** | 4096 |
| **KV Heads (GQA)** | 8 |
| **Hidden Dim** | 12288 |
| **Head Dim** | 128 |
| **Vocab Size** | 131,072 |
| **Context Length** | 128k |
| **Attention Pattern** | Ragged (128k,32k,32k,32k) |
## Benchmarks
#### Base Models
<u>Knowledge & Commonsense</u>
| Model | MMLU | AGIEval | Winogrande | Arc-c | TriviaQA |
|:-------------:|:------:|:---------:|:------------:|:-------:|:----------:|
| Mistral 7B Base | 62.5 | 42.5 | 74.2 | 67.9 | 62.5 |
| Llama 3.1 8B Base | 64.7 | 44.4 | 74.6 | 46.0 | 60.2 |
| ***Ministral 8B Base*** | ***<u>65.0</u>*** | ***<u>48.3</u>*** | ***<u>75.3</u>*** | ***<u>71.9</u>*** | ***<u>65.5</u>*** |
| | | | | | |
| Gemma 2 2B Base | 52.4 | 33.8 | 68.7 | 42.6 | 47.8 |
| Llama 3.2 3B Base | 56.2 | 37.4 | 59.6 | 43.1 | 50.7 |
| ***Ministral 3B Base*** | ***<u>60.9</u>*** | ***<u>42.1</u>*** | ***<u>72.7</u>*** | ***<u>64.2</u>*** | ***<u>56.7</u>*** |
<u>Code & Math</u>
| Model | HumanEval pass@1 |GSM8K maj@8 |
|:-------------:|:-------------------:|:---------------:|
| Mistral 7B Base | 26.8 | 32.0 |
| Llama 3.1 8B Base | ***<u>37.8</u>*** | 42.2 |
| ***Ministral 8B Base*** | 34.8 | ***<u>64.5</u>*** |
| | | |
| Gemma 2 2B | 20.1 | 35.5 |
| Llama 3.2 3B | 14.6 | 33.5 |
| ***Ministral 3B*** | ***<u>34.2</u>*** | ***<u>50.9</u>*** |
<u>Multilingual</u>
| Model | French MMLU | German MMLU | Spanish MMLU |
|:-------------:|:-------------:|:-------------:|:-------------:|
| Mistral 7B Base | 50.6 | 49.6 | 51.4 |
| Llama 3.1 8B Base | 50.8 | 52.8 | 54.6 |
| ***Ministral 8B Base*** | ***<u>57.5</u>*** | ***<u>57.4</u>*** | ***<u>59.6</u>*** |
| | | | |
| Gemma 2 2B Base | 41.0 | 40.1 | 41.7 |
| Llama 3.2 3B Base | 42.3 | 42.2 | 43.1 |
| ***Ministral 3B Base*** | ***<u>49.1</u>*** | ***<u>48.3</u>*** | ***<u>49.5</u>*** |
### Instruct Models
<u>Chat/Arena (gpt-4o judge)</u>
| Model | MTBench | Arena Hard | Wild bench |
|:-------------:|:---------:|:------------:|:------------:|
| Mistral 7B Instruct v0.3 | 6.7 | 44.3 | 33.1 |
| Llama 3.1 8B Instruct | 7.5 | 62.4 | 37.0 |
| Gemma 2 9B Instruct | 7.6 | 68.7 | ***<u>43.8</u>*** |
| ***Ministral 8B Instruct*** | ***<u>8.3</u>*** | ***<u>70.9</u>*** | 41.3 |
| | | | |
| Gemma 2 2B Instruct | 7.5 | 51.7 | 32.5 |
| Llama 3.2 3B Instruct | 7.2 | 46.0 | 27.2 |
| ***Ministral 3B Instruct*** | ***<u>8.1</u>*** | ***<u>64.3</u>*** | ***<u>36.3</u>*** |
<u>Code & Math</u>
| Model | MBPP pass@1 | HumanEval pass@1 | Math maj@1 |
|:-------------:|:-------------:|:------------------:|:-------------:|
| Mistral 7B Instruct v0.3 | 50.2 | 38.4 | 13.2 |
| Gemma 2 9B Instruct | 68.5 | 67.7 | 47.4 |
| Llama 3.1 8B Instruct | 69.7 | 67.1 | 49.3 |
| ***Ministral 8B Instruct*** | ***<u>70.0</u>*** | ***<u>76.8</u>*** | ***<u>54.5</u>*** |
| | | | |
| Gemma 2 2B Instruct | 54.5 | 42.7 | 22.8 |
| Llama 3.2 3B Instruct | 64.6 | 61.0 | 38.4 |
| ***Ministral 3B Instruct*** | ***<u>67.7</u>*** | ***<u>77.4</u>*** | ***<u>51.7</u>*** |
<u>Function calling</u>
| Model | Internal bench |
|:-------------:|:-----------------:|
| Mistral 7B Instruct v0.3 | 6.9 |
| Llama 3.1 8B Instruct | N/A |
| Gemma 2 9B Instruct | N/A |
| ***Ministral 8B Instruct*** | ***<u>31.6</u>*** |
| | |
| Gemma 2 2B Instruct | N/A |
| Llama 3.2 3B Instruct | N/A |
| ***Ministral 3B Instruct*** | ***<u>28.4</u>*** |
## Usage Examples
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
> [!IMPORTANT]
> Currently vLLM is capped at 32k context size because interleaved attention kernels for paged attention are not yet implemented in vLLM.
> Attention kernels for paged attention are being worked on and as soon as it is fully supported in vLLM, this model card will be updated.
> To take advantage of the full 128k context size we recommend [Mistral Inference](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410#mistral-inference)
**_Installation_**
Make sure you install `vLLM >= v0.6.4`:
```
pip install --upgrade vllm
```
Also make sure you have `mistral_common >= 1.4.4` installed:
```
pip install --upgrade mistral_common
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile).
**_Offline_**
```py
from vllm import LLM
from vllm.sampling_params import SamplingParams
model_name = "mistralai/Ministral-8B-Instruct-2410"
sampling_params = SamplingParams(max_tokens=8192)
# note that running Ministral 8B on a single GPU requires 24 GB of GPU RAM
# If you want to divide the GPU requirement over multiple devices, please add *e.g.* `tensor_parallel_size=2`
llm = LLM(model=model_name, tokenizer_mode="mistral", config_format="mistral", load_format="mistral")
prompt = "Do we need to think for 10 seconds to find the answer of 1 + 1?"
messages = [
{
"role": "user",
"content": prompt
},
]
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
# You don't need to think for 10 seconds to find the answer to 1 + 1. The answer is 2,
# and you can easily add these two numbers in your mind very quickly without any delay.
```
**_Server_**
You can also use Ministral-8B in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Ministral-8B-Instruct-2410 --tokenizer_mode mistral --config_format mistral --load_format mistral
```
**Note:** Running Ministral-8B on a single GPU requires 24 GB of GPU RAM.
If you want to divide the GPU requirement over multiple devices, please add *e.g.* `--tensor-parallel-size 2`
2. And ping the client:
```
curl --location 'http://<your-node-url>:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer token' \
--data '{
"model": "mistralai/Ministral-8B-Instruct-2410",
"messages": [
{
"role": "user",
"content": "Do we need to think for 10 seconds to find the answer of 1 + 1?"
}
]
}'
```
### Mistral-inference
We recommend using [mistral-inference](https://github.com/mistralai/mistral-inference) to quickly try out / "vibe-check" the model.
**_Install_**
Make sure to have `mistral_inference >= 1.5.0` installed.
```
pip install mistral_inference --upgrade
```
**_Download_**
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Ministral-8B-Instruct-2410", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/8B-Instruct --instruct --max_tokens 256
```
### Passkey detection
> [!IMPORTANT]
> In this example the passkey message has over 100k tokens and mistral-inference
> does not have a chunked pre-fill mechanism. Therefore you will need a lot of
> GPU memory in order to run the below example (80 GB). For a more memory-efficient
> solution we recommend using vLLM.
```py
from mistral_inference.transformer import Transformer
from pathlib import Path
import json
from mistral_inference.generate import generate
from huggingface_hub import hf_hub_download
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
def load_passkey_request() -> ChatCompletionRequest:
passkey_file = hf_hub_download(repo_id="mistralai/Ministral-8B-Instruct-2410", filename="passkey_example.json")
with open(passkey_file, "r") as f:
data = json.load(f)
message_content = data["messages"][0]["content"]
return ChatCompletionRequest(messages=[UserMessage(content=message_content)])
mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct')  # as downloaded above
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path, softmax_fp32=False)
completion_request = load_passkey_request()
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result) # The pass key is 13005.
```
### Instruct following
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct')  # as downloaded above
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="How often does the letter r occur in Mistral?")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct')  # as downloaded above
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
tekken = tokenizer.instruct_tokenizer.tokenizer
tekken.special_token_policy = SpecialTokenPolicy.IGNORE
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## The Mistral AI Team
Albert Jiang, Alexandre Abou Chahine, Alexandre Sablayrolles, Alexis Tacnet, Alodie Boissonnet, Alok Kothari, Amélie Héliou, Andy Lo, Anna Peronnin, Antoine Meunier, Antoine Roux, Antonin Faure, Aritra Paul, Arthur Darcet, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Avinash Sooriyarachchi, Baptiste Rozière, Barry Conklin, Bastien Bouillon, Blanche Savary de Beauregard, Carole Rambaud, Caroline Feldman, Charles de Freminville, Charline Mauro, Chih-Kuan Yeh, Chris Bamford, Clement Auguy, Corentin Heintz, Cyriaque Dubois, Devendra Singh Chaplot, Diego Las Casas, Diogo Costa, Eléonore Arcelin, Emma Bou Hanna, Etienne Metzger, Fanny Olivier Autran, Francois Lesage, Garance Gourdel, Gaspard Blanchet, Gaspard Donada Vidal, Gianna Maria Lengyel, Guillaume Bour, Guillaume Lample, Gustave Denis, Harizo Rajaona, Himanshu Jaju, Ian Mack, Ian Mathew, Jean-Malo Delignon, Jeremy Facchetti, Jessica Chudnovsky, Joachim Studnia, Justus Murke, Kartik Khandelwal, Kenneth Chiu, Kevin Riera, Leonard Blier, Leonard Suslian, Leonardo Deschaseaux, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Sophia Yang, Margaret Jennings, Marie Pellat, Marie Torelli, Marjorie Janiewicz, Mathis Felardos, Maxime Darrin, Michael Hoff, Mickaël Seznec, Misha Jessel Kenyon, Nayef Derwiche, Nicolas Carmont Zaragoza, Nicolas Faurie, Nicolas Moreau, Nicolas Schuhl, Nikhil Raghuraman, Niklas Muhs, Olivier de Garrigues, Patricia Rozé, Patricia Wang, Patrick von Platen, Paul Jacob, Pauline Buche, Pavankumar Reddy Muddireddy, Perry Savas, Pierre Stock, Pravesh Agrawal, Renaud de Peretti, Romain Sauvestre, Romain Sinthe, Roman Soletskyi, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Soham Ghosh, Sylvain Regnier, Szymon Antoniak, Teven Le Scao, Theophile Gervet, Thibault Schueller, Thibaut Lavril, Thomas Wang, Timothée Lacroix, Valeriia Nemychnikova, Wendy Shang, William El Sayed, William Marshall
|
Trelis/Qwen3-4B_ds-arc-agi-1-perfect-50_test-c4
|
Trelis
| 2025-08-21T16:41:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T16:39:54Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755792747
|
mang3dd
| 2025-08-21T16:38:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:38:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
OpenVINO/Phi-3.5-vision-instruct-fp16-ov
|
OpenVINO
| 2025-08-21T16:38:38Z | 128 | 0 | null |
[
"openvino",
"phi3_v",
"nlp",
"code",
"vision",
"image-text-to-text",
"conversational",
"custom_code",
"multilingual",
"base_model:microsoft/Phi-3.5-vision-instruct",
"base_model:finetune:microsoft/Phi-3.5-vision-instruct",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-01-21T06:07:13Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-vision-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: image-text-to-text
tags:
- nlp
- code
- vision
base_model:
- microsoft/Phi-3.5-vision-instruct
---
# Phi-3.5-vision-instruct-ov-fp16
* Model creator: [Microsoft](https://huggingface.co/microsoft)
* Original model: [Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
## Description
This is [microsoft/Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) model converted to the [OpenVINOβ’ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format.
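A conversion like this can be reproduced with Optimum Intel; a minimal sketch (the `export=True` path assumes a recent optimum-intel with the OpenVINO extra installed):
```python
from optimum.intel.openvino import OVModelForVisualCausalLM

# Convert the PyTorch checkpoint to OpenVINO IR on the fly and save it locally.
ov_model = OVModelForVisualCausalLM.from_pretrained(
    "microsoft/Phi-3.5-vision-instruct",
    export=True,
    trust_remote_code=True,
)
ov_model.save_pretrained("Phi-3.5-vision-instruct-fp16-ov")
```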
## Compatibility
The provided OpenVINOβ’ IR model is compatible with:
* OpenVINO version 2025.0.0 and higher
* Optimum Intel 1.21.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install --pre -U --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/pre-release openvino_tokenizers openvino
pip install git+https://github.com/huggingface/optimum-intel.git
```
2. Run model inference
```
from PIL import Image
import requests
from optimum.intel.openvino import OVModelForVisualCausalLM
from transformers import AutoProcessor, TextStreamer
model_id = "OpenVINO/Phi-3.5-vision-instruct-fp16-ov"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
ov_model = OVModelForVisualCausalLM.from_pretrained(model_id, trust_remote_code=True)
prompt = "<|image_1|>\nWhat is unusual on this picture?"
url = "https://github.com/openvinotoolkit/openvino_notebooks/assets/29454499/d5fbbd1a-d484-415c-88cb-9986625b7b11"
image = Image.open(requests.get(url, stream=True).raw)
inputs = ov_model.preprocess_inputs(text=prompt, image=image, processor=processor)
generation_args = {
"max_new_tokens": 50,
"temperature": 0.0,
"do_sample": False,
"streamer": TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)
}
generate_ids = ov_model.generate(**inputs,
eos_token_id=processor.tokenizer.eos_token_id,
**generation_args
)
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
response = processor.batch_decode(generate_ids,
 skip_special_tokens=True,
 clean_up_tokenization_spaces=False)[0]
print(response)
```
## Limitations
Check the original [model card](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) for limitations.
## Legal information
The original model is distributed under [MIT](https://huggingface.co/microsoft/Phi-3.5-vision-instruct/blob/main/LICENSE) license. More details can be found in [original model card](https://huggingface.co/microsoft/Phi-3.5-vision-instruct).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755794260
|
Dejiat
| 2025-08-21T16:38:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:38:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF
|
mradermacher
| 2025-08-21T16:37:09Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:Samreth/SFT-GRPO-Qwen-1.7B-2CoT",
"base_model:quantized:Samreth/SFT-GRPO-Qwen-1.7B-2CoT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-21T16:23:30Z |
---
base_model: Samreth/SFT-GRPO-Qwen-1.7B-2CoT
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Samreth/SFT-GRPO-Qwen-1.7B-2CoT
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SFT-GRPO-Qwen-1.7B-2CoT-GGUF).***
Weighted/imatrix quants are not currently available from me. If they do not appear within a week or so after the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
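As a minimal sketch of the typical workflow (assumptions: `llama-cpp-python` is installed, and the quant file name is taken from the table below):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumption: llama-cpp-python installed

# Download one quant file from this repo and run it locally
path = hf_hub_download(
    "mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF",
    "SFT-GRPO-Qwen-1.7B-2CoT.Q4_K_M.gguf",
)
llm = Llama(model_path=path)
print(llm("Hello", max_tokens=64)["choices"][0]["text"])
```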
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SFT-GRPO-Qwen-1.7B-2CoT-GGUF/resolve/main/SFT-GRPO-Qwen-1.7B-2CoT.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common questions and to request quantization of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755793943
|
lqpl
| 2025-08-21T16:36:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:33:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1755792576
|
unitova
| 2025-08-21T16:35:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:35:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
UKPLab/Qwen2.5-3b-spare-prm-math
|
UKPLab
| 2025-08-21T16:35:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"PRM",
"SPARE-PRM",
"MATH",
"Reward Model",
"Process Reward Model",
"Reasoning",
"Mathematical Reasoning",
"Verifier",
"Process Supervision",
"Process Verifier",
"conversational",
"arxiv:2506.15498",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T16:13:00Z |
---
library_name: transformers
tags:
- PRM
- SPARE-PRM
- MATH
- Reward Model
- Process Reward Model
- Reasoning
- Mathematical Reasoning
- Verifier
- Process Supervision
- Process Verifier
---
# Model Card for SPARE-PRM
Process Reward Model (Qwen2.5-3b) used in our [SPARE](https://arxiv.org/abs/2506.15498) paper. This model was trained only on the generations from the MATH dataset.
## Model Details
`Input`: instruction + question + a step-by-step solution, with a special step tag `ки` marking the end of each step.
`Output`: the logits; post-process them to obtain a correctness score for each step.
## Usage
```python
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
import torch
incorrect_token = "-"
correct_token = "+"
step_tag = " ки"  # the leading space is required for correct tokenization of the tag
tokenizer = AutoTokenizer.from_pretrained("UKPLab/Qwen2.5-3b-spare-prm-math")
step_target_ids = tokenizer.convert_tokens_to_ids([incorrect_token, correct_token])
step_tag_id = tokenizer.encode(step_tag)[-1]
device = "cuda:0"
model = AutoModelForCausalLM.from_pretrained("UKPLab/Qwen2.5-3b-spare-prm-math").to(device).eval()
# Include this system instruction verbatim; it was used unchanged during PRM training.
instruction = "You are an expert at solving challenging math problems spanning across various categories and difficulties such as Algebra, Number Theory, Geometry, Counting and Probability, Precalculus etc. For a given math problem, your task is to generate a step-by-step reasoning-based solution providing an answer to the question. Identify the correct concepts, formulas and heuristics that needs to be applied and then derive the contents of the reasoning steps from the given contexts and accurate calculations from the previous reasoning steps."
question = "Yann and Camille go to a restaurant. </S>\nIf there are 10 items on the menu, and each orders one dish, how many different combinations of meals can Yann and Camille order if they refuse to order the same dish? (It does matter who orders what---Yann ordering chicken and Camille ordering fish is different from Yann ordering fish and Camille ordering chicken.)"
correct_generation = "Let's think step by step.\nYann can order 1 of the 10 dishes. ΠΊΠΈ\nWhen he picks a dish, there are 9 left for Camille to choose from. ΠΊΠΈ\nThus, there are $10\\cdot 9=\\boxed{90}$ possible combinations.\nHence, the answer is 90. ΠΊΠΈ\n"
incorrect_generation = "Let's think step by step.\nWithout any restrictions, Yann and Camille could both order the same dish out of the 10 options, for a total of $10 \\cdot 9$ dishes. ΠΊΠΈ\nHowever, since Yann orders one of the 9 dishes that Camille didn't order (and vice versa), the number of possible combinations becomes $10 \\cdot 9 - 8 = \\boxed{72}$.\nHence, the answer is 72. ΠΊΠΈ\n"
for generation in (correct_generation, incorrect_generation):
    message = [
        dict(role="system", content=instruction),
        dict(role="user", content=question),
        dict(role="assistant", content=generation),
    ]
    input_ids = tokenizer.apply_chat_template(message, tokenize=True, return_tensors="pt").to(device)
    with torch.no_grad():
        logits = model(input_ids).logits[:, :, step_target_ids]
    # Probability of the correct_token (index 1 in step_target_ids) at every position
    scores = logits.softmax(dim=-1)[:, :, 1]
    # Keep only the positions of the step tag: one score per reasoning step
    step_scores = scores[input_ids == step_tag_id]
    print(step_scores)
# tensor([0.8710, 0.9163, 0.9786]) - correct_generation
# tensor([0.3292, 0.5288]) - incorrect_generation
```
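The printed tensors contain one probability per reasoning step. For downstream uses such as best-of-n reranking, these per-step scores are typically aggregated into a single solution-level score; a minimal sketch (the aggregation choice is an assumption, not prescribed here):
```python
# Hypothetical aggregations of the per-step scores computed above
solution_score_min = step_scores.min().item()    # score of the weakest step
solution_score_prod = step_scores.prod().item()  # joint correctness of all steps
```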
## Citation
Please cite this model using:
```
@misc{rizvi2025sparesinglepassannotationreferenceguided,
title={SPARE: Single-Pass Annotation with Reference-Guided Evaluation for Automatic Process Supervision and Reward Modelling},
author={Md Imbesat Hassan Rizvi and Xiaodan Zhu and Iryna Gurevych},
year={2025},
eprint={2506.15498},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.15498},
}
```
|
zhilyaev/gpt2
|
zhilyaev
| 2025-08-21T16:34:31Z | 0 | 0 | null |
[
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"onnx",
"safetensors",
"gpt2",
"exbert",
"en",
"license:mit",
"region:us"
] | null | 2025-08-21T16:32:37Z |
---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Try out the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on the English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on raw text only, with no human labelling of any kind (which is why it can use lots of
publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
Concretely, inputs are sequences of continuous text of a certain length, and the targets are the same sequences
shifted one token (a word or piece of a word) to the right. The model internally uses a masking mechanism to ensure
that the predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
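As a minimal sketch of this objective (illustrative only, using the standard tokenizer):
```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained('gpt2')
ids = tok("Hello world, this is GPT-2.")["input_ids"]
# Targets are the inputs shifted one position: predict token i+1 from tokens 1..i
inputs, labels = ids[:-1], ids[1:]
```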
In this way, the model learns an inner representation of the English language that can then be used to extract
features useful for downstream tasks. The model is nevertheless best at what it was pretrained for, which is
generating text from a prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-2 Large](https://huggingface.co/gpt2-large), [GPT-2 Medium](https://huggingface.co/gpt2-medium) and [GPT-2 XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don't support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
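For illustration (a small sketch with the standard tokenizer):
```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained('gpt2')
print(tok.vocab_size)                # 50257
print(tok.tokenize("tokenization"))  # byte-level BPE splits rare words into sub-pieces
```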
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755793854
|
yaelahnal
| 2025-08-21T16:32:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:31:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755793752
|
ggozzy
| 2025-08-21T16:30:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:30:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AITUNED/gpt2-finetuned-test
|
AITUNED
| 2025-08-21T16:29:03Z | 0 | 0 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T16:24:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
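Since the repository tags indicate a GPT-2 text-generation checkpoint, a minimal (unverified) starting point might be:
```python
from transformers import pipeline

# Assumption: standard text-generation usage; the card itself provides no details
generator = pipeline("text-generation", model="AITUNED/gpt2-finetuned-test")
print(generator("Hello, world", max_new_tokens=20)[0]["generated_text"])
```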
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755792078
|
kojeklollipop
| 2025-08-21T16:28:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:28:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755792130
|
thanobidex
| 2025-08-21T16:28:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:28:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
viraja1/banking-gguf
|
viraja1
| 2025-08-21T16:26:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:quantized:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-21T16:16:33Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** viraja1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755793551
|
Dejiat
| 2025-08-21T16:26:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:26:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vjkhambe/dqn-SpaceInvadersNoFrameskip-v4
|
vjkhambe
| 2025-08-21T16:25:16Z | 32 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-18T21:15:46Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 640.00 +/- 89.39
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vjkhambe -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vjkhambe -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
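Outside the RL Zoo, the checkpoint can also be loaded directly with SB3; a minimal sketch, assuming the file name inside the repo follows RL Zoo naming conventions:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# The file name is an assumption based on RL Zoo conventions
checkpoint = load_from_hub(
    "vjkhambe/dqn-SpaceInvadersNoFrameskip-v4",
    "dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```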
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vjkhambe
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Trelis/Qwen3-4B_ds-arc-agi-2-perfect-50_test-c4
|
Trelis
| 2025-08-21T16:24:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T16:23:16Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pietro0hz/blockassist-bc-ferocious_toothy_tortoise_1755793302
|
pietro0hz
| 2025-08-21T16:23:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ferocious toothy tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:23:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ferocious toothy tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vinukatashidu/gemma3_lora_merged
|
vinukatashidu
| 2025-08-21T16:22:38Z | 0 | 0 | null |
[
"safetensors",
"gemma3_text",
"license:apache-2.0",
"region:us"
] | null | 2025-08-21T16:11:39Z |
---
license: apache-2.0
---
|
jahyungu/Qwen2.5-Coder-1.5B-Instruct_openbookqa
|
jahyungu
| 2025-08-21T16:22:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T14:45:53Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Qwen2.5-Coder-1.5B-Instruct_openbookqa
results: []
---
# Qwen2.5-Coder-1.5B-Instruct_openbookqa
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
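These settings map onto `transformers.TrainingArguments` roughly as follows (a sketch; `output_dir` and all unlisted defaults are assumptions):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Qwen2.5-Coder-1.5B-Instruct_openbookqa",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=2,
)
```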
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755793273
|
Dejiat
| 2025-08-21T16:21:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T16:21:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|