Dataset schema (one row per model; the rows below follow this column order):

| Column | Dtype | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-06 00:36:47 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (540 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-06 00:36:27 |
| card | string (length) | 11 | 1.01M |
tensorblock/payelb_GPT2L_full-GGUF
tensorblock
2025-08-12T05:07:57Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "TensorBlock", "GGUF", "base_model:payelb/GPT2L_full", "base_model:quantized:payelb/GPT2L_full", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-08-12T04:59:00Z
--- library_name: transformers license: mit base_model: payelb/GPT2L_full tags: - generated_from_trainer - TensorBlock - GGUF model-index: - name: GPT2L_full results: [] --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> [![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co) [![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2) [![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock) [![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock) ## payelb/GPT2L_full - GGUF <div style="text-align: left; margin: 20px 0;"> <a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Join our Discord to learn more about what we're building ↗ </a> </div> This repo contains GGUF format model files for [payelb/GPT2L_full](https://huggingface.co/payelb/GPT2L_full). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277). ## Our projects <table border="1" cellspacing="0" cellpadding="10"> <tr> <th colspan="2" style="font-size: 25px;">Forge</th> </tr> <tr> <th colspan="2"> <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/> </th> </tr> <tr> <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th> </tr> <tr> <th colspan="2"> <a href="https://github.com/TensorBlock/forge" target="_blank" style=" display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif; ">🚀 Try it now! 🚀</a> </th> </tr> <tr> <th style="font-size: 25px;">Awesome MCP Servers</th> <th style="font-size: 25px;">TensorBlock Studio</th> </tr> <tr> <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th> <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th> </tr> <tr> <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th> <th>A lightweight, open, and extensible multi-LLM interaction studio.</th> </tr> <tr> <th> <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style=" display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif; ">👀 See what we built 👀</a> </th> <th> <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style=" display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif; ">👀 See what we built 👀</a> </th> </tr> </table> ## Prompt template ``` Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format. 
``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [GPT2L_full-Q2_K.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q2_K.gguf) | Q2_K | 0.324 GB | smallest, significant quality loss - not recommended for most purposes | | [GPT2L_full-Q3_K_S.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q3_K_S.gguf) | Q3_K_S | 0.366 GB | very small, high quality loss | | [GPT2L_full-Q3_K_M.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q3_K_M.gguf) | Q3_K_M | 0.431 GB | very small, high quality loss | | [GPT2L_full-Q3_K_L.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q3_K_L.gguf) | Q3_K_L | 0.466 GB | small, substantial quality loss | | [GPT2L_full-Q4_0.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q4_0.gguf) | Q4_0 | 0.460 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [GPT2L_full-Q4_K_S.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q4_K_S.gguf) | Q4_K_S | 0.464 GB | small, greater quality loss | | [GPT2L_full-Q4_K_M.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q4_K_M.gguf) | Q4_K_M | 0.513 GB | medium, balanced quality - recommended | | [GPT2L_full-Q5_0.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q5_0.gguf) | Q5_0 | 0.549 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [GPT2L_full-Q5_K_S.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q5_K_S.gguf) | Q5_K_S | 0.549 GB | large, low quality loss - recommended | | [GPT2L_full-Q5_K_M.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q5_K_M.gguf) | Q5_K_M | 0.588 GB | large, very low quality loss - recommended | | [GPT2L_full-Q6_K.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q6_K.gguf) | Q6_K | 0.643 GB | very large, extremely low quality loss | | [GPT2L_full-Q8_0.gguf](https://huggingface.co/tensorblock/payelb_GPT2L_full-GGUF/blob/main/GPT2L_full-Q8_0.gguf) | Q8_0 | 0.830 GB | very large, extremely low quality loss - not recommended | ## Downloading instructions ### Command line First, install the Hugging Face CLI: ```shell pip install -U "huggingface_hub[cli]" ``` Then, download an individual model file to a local directory: ```shell huggingface-cli download tensorblock/payelb_GPT2L_full-GGUF --include "GPT2L_full-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/payelb_GPT2L_full-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
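For reference, a minimal Python sketch of the same single-file download using `huggingface_hub` (the filename comes from the specification table above):

```python
# Download one quant of tensorblock/payelb_GPT2L_full-GGUF via huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="tensorblock/payelb_GPT2L_full-GGUF",
    filename="GPT2L_full-Q4_K_M.gguf",  # the "recommended" quant from the table
    local_dir="MY_LOCAL_DIR",
)
print(local_path)  # path to the downloaded GGUF file
```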
cucucu666/ganga-8.12
cucucu666
2025-08-12T05:07:34Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-Fill-dev", "base_model:adapter:black-forest-labs/FLUX.1-Fill-dev", "license:other", "region:us" ]
text-to-image
2025-08-12T03:11:37Z
--- base_model: black-forest-labs/FLUX.1-Fill-dev library_name: diffusers license: other instance_prompt: Lego male face, Lego style, embarrassed expression, plain white background widget: - text: Lego male face, Lego style, embarrassed expression, plain white background output: url: image_0.png - text: Lego male face, Lego style, embarrassed expression, plain white background output: url: image_1.png - text: Lego male face, Lego style, embarrassed expression, plain white background output: url: image_2.png - text: Lego male face, Lego style, embarrassed expression, plain white background output: url: image_3.png tags: - text-to-image - diffusers-training - diffusers - lora - flux - flux-diffusers - template:sd-lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Flux-Fill DreamBooth LoRA - cucucu666/ganga-8.12 <Gallery /> ## Model description These are cucucu666/ganga-8.12 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training). LoRA for the text encoder was not enabled. ## Trigger words You should use `Lego male face, Lego style, embarrassed expression, plain white background` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](https://huggingface.co/cucucu666/ganga-8.12/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('cucucu666/ganga-8.12', weight_name='pytorch_lora_weights.safetensors') image = pipeline('Lego male face, Lego style, embarrassed expression, plain white background').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
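Note that the snippet above loads the base `black-forest-labs/FLUX.1-dev` checkpoint, while these weights were trained against `FLUX.1-Fill-dev`, an inpainting ("fill") model. A minimal sketch using the Fill pipeline instead, assuming a recent diffusers release with `FluxFillPipeline` and hypothetical image/mask filenames:

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("cucucu666/ganga-8.12", weight_name="pytorch_lora_weights.safetensors")

image = load_image("face.png")       # hypothetical source image
mask = load_image("face_mask.png")   # hypothetical mask; white marks the region to repaint
result = pipe(
    prompt="Lego male face, Lego style, embarrassed expression, plain white background",
    image=image,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
result.save("filled.png")
```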
gayatridt/llama32-dpo-iterative-2
gayatridt
2025-08-12T05:07:16Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-12T05:07:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gayatridt/llama32-dpo-iterative-1
gayatridt
2025-08-12T05:03:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-12T05:02:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754974887
ggozzy
2025-08-12T05:02:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T05:02:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hanlforever/distilbert-base-uncased-finetuned-emotion
hanlforever
2025-08-12T04:59:48Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-12T04:09:35Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1596 - Accuracy: 0.938 - F1: 0.9381 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1719 | 1.0 | 250 | 0.1708 | 0.9315 | 0.9317 | | 0.1114 | 2.0 | 500 | 0.1596 | 0.938 | 0.9381 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.5.1+cu118 - Datasets 4.0.0 - Tokenizers 0.21.4
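Since the card omits a usage example, here is a minimal inference sketch with the `transformers` pipeline API (the label set depends on the unspecified training dataset, so the output shown is only illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hanlforever/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this turned out!"))
# Illustrative output: [{'label': 'joy', 'score': 0.99}] -- the actual labels
# depend on the dataset the model was fine-tuned on.
```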
infospot/infospot
infospot
2025-08-12T04:58:58Z
0
0
null
[ "region:us" ]
null
2025-08-12T04:58:34Z
Info-Spot lets you find and download restaurant menus instantly in a simple, easy-to-read PDF format that’s always up to date. Website: https://info-spot.com/ Social Media: - https://www.facebook.com/infospotcom/ - https://www.linkedin.com/company/info-spot-com/about/ - https://www.youtube.com/@SpotInfo-com - https://x.com/infospotcom - https://www.pinterest.com/infospotcom/
koloni/blockassist-bc-deadly_graceful_stingray_1754973170
koloni
2025-08-12T04:57:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:57:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
motza0025/blockassist-bc-slithering_stalking_otter_1754972965
motza0025
2025-08-12T04:56:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "slithering stalking otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:56:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - slithering stalking otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Perf89/blockassist-bc-sleek_opaque_snail_1754972811
Perf89
2025-08-12T04:55:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sleek opaque snail", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:55:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sleek opaque snail --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BootesVoid/cm8tb7xkk0000wzj24pkk2m5g_cme81m942009yrts8lhmqopbk
BootesVoid
2025-08-12T04:52:18Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-12T04:52:17Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: GRMNGRL --- # Cm8Tb7Xkk0000Wzj24Pkk2M5G_Cme81M942009Yrts8Lhmqopbk <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `GRMNGRL` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "GRMNGRL", "lora_weights": "https://huggingface.co/BootesVoid/cm8tb7xkk0000wzj24pkk2m5g_cme81m942009yrts8lhmqopbk/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cm8tb7xkk0000wzj24pkk2m5g_cme81m942009yrts8lhmqopbk', weight_name='lora.safetensors') image = pipeline('GRMNGRL').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cm8tb7xkk0000wzj24pkk2m5g_cme81m942009yrts8lhmqopbk/discussions) to add images that show off what you’ve made with this LoRA.
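The loading documentation linked above also covers adapter weighting and fusing; a hedged sketch of both on top of the diffusers snippet (the adapter name and the 0.8 scale are illustrative choices, not values from this card):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "BootesVoid/cm8tb7xkk0000wzj24pkk2m5g_cme81m942009yrts8lhmqopbk",
    weight_name="lora.safetensors",
    adapter_name="grmngrl",  # illustrative adapter name
)
pipeline.set_adapters(["grmngrl"], adapter_weights=[0.8])  # down-weight the LoRA
pipeline.fuse_lora(lora_scale=0.8)  # bake it into the base weights for faster inference
image = pipeline("GRMNGRL").images[0]
```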
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754974220
IvanJAjebu
2025-08-12T04:51:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:51:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
carolinechu/unsloth_model_8bit
carolinechu
2025-08-12T04:50:58Z
539
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-08T00:57:52Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** carolinechu - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
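`transformers` can also load a GGUF export from this repo directly (it dequantizes to full precision on load); a minimal sketch, assuming a hypothetical quant filename:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "carolinechu/unsloth_model_8bit"
gguf = "unsloth.Q8_0.gguf"  # hypothetical filename; check the repo's Files tab for the real one
tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf)
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf)  # dequantized on load
```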
NexVeridian/Qwen3-4B-Instruct-2507-5bit
NexVeridian
2025-08-12T04:49:47Z
5
0
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:quantized:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "5-bit", "region:us" ]
text-generation
2025-08-06T17:40:26Z
--- library_name: mlx license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE pipeline_tag: text-generation base_model: Qwen/Qwen3-4B-Instruct-2507 tags: - mlx --- # NexVeridian/Qwen3-4B-Instruct-2507-5bit This model [NexVeridian/Qwen3-4B-Instruct-2507-5bit](https://huggingface.co/NexVeridian/Qwen3-4B-Instruct-2507-5bit) was converted to MLX format from [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("NexVeridian/Qwen3-4B-Instruct-2507-5bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
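To bound the reply length, `generate` also accepts `max_tokens`; a small extension of the snippet above:

```python
from mlx_lm import load, generate

model, tokenizer = load("NexVeridian/Qwen3-4B-Instruct-2507-5bit")
messages = [{"role": "user", "content": "Summarize MLX in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
# verbose=True also prints generation-speed and memory statistics.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```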
RMCian/blockassist-bc-wiry_sturdy_cobra_1754974091
RMCian
2025-08-12T04:48:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry sturdy cobra", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:48:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry sturdy cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754973972
ggozzy
2025-08-12T04:47:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:47:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754973946
IvanJAjebu
2025-08-12T04:46:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:46:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
RMCian/blockassist-bc-wiry_sturdy_cobra_1754973936
RMCian
2025-08-12T04:46:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry sturdy cobra", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:45:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry sturdy cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Nerva1228/kuafeng
Nerva1228
2025-08-12T04:45:56Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-12T04:10:40Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: kuafeng --- # Kuafeng <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `kuafeng` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "kuafeng", "lora_weights": "https://huggingface.co/Nerva1228/kuafeng/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Nerva1228/kuafeng', weight_name='lora.safetensors') image = pipeline('kuafeng').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 5e-05 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Nerva1228/kuafeng/discussions) to add images that show off what you’ve made with this LoRA.
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754973791
afasdfdfadsf
2025-08-12T04:44:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:43:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Luth-0.6B-Instruct-GGUF
mradermacher
2025-08-12T04:40:22Z
0
0
transformers
[ "transformers", "gguf", "fr", "en", "dataset:kurakurai/luth-sft", "base_model:kurakurai/Luth-0.6B-Instruct", "base_model:quantized:kurakurai/Luth-0.6B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-12T01:03:54Z
--- base_model: kurakurai/Luth-0.6B-Instruct datasets: - kurakurai/luth-sft language: - fr - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/kurakurai/Luth-0.6B-Instruct <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Luth-0.6B-Instruct-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Luth-0.6B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Luth-0.6B-Instruct-GGUF/resolve/main/Luth-0.6B-Instruct.f16.gguf) | f16 | 1.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
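Beyond llama.cpp itself, the quants above can be run from Python; a minimal sketch using the third-party llama-cpp-python bindings (an assumption, as this card does not mention them), after downloading one of the files listed above:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="Luth-0.6B-Instruct.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Bonjour ! Présente-toi en une phrase."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```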
deanb258/segformer-b5-fine-tuned-test
deanb258
2025-08-12T04:40:06Z
0
0
transformers
[ "transformers", "safetensors", "segformer", "vision", "image_segmentation", "generated_from_trainer", "base_model:nvidia/segformer-b2-finetuned-ade-512-512", "base_model:finetune:nvidia/segformer-b2-finetuned-ade-512-512", "license:other", "endpoints_compatible", "region:us" ]
null
2025-08-12T04:39:30Z
--- library_name: transformers license: other base_model: nvidia/segformer-b2-finetuned-ade-512-512 tags: - vision - image_segmentation - generated_from_trainer model-index: - name: segformer-b5-fine-tuned-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b5-fine-tuned-test This model is a fine-tuned version of [nvidia/segformer-b2-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b2-finetuned-ade-512-512) on the deanb258/dataset_latest_full dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 200 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.52.1 - Pytorch 2.6.0+cpu - Datasets 3.6.0 - Tokenizers 0.21.1
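The card omits an inference example; a minimal sketch with the `transformers` image-segmentation pipeline (the input filename is hypothetical):

```python
from PIL import Image
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="deanb258/segformer-b5-fine-tuned-test")
results = segmenter(Image.open("room.jpg"))  # hypothetical input image
for r in results:
    # Each result carries a class label and a PIL mask of that region.
    print(r["label"], r["mask"].size)
```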
megumiin/blockassist-bc-colorful_swift_beaver_1754973480
megumiin
2025-08-12T04:39:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful swift beaver", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:39:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful swift beaver --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
giovannidemuri/llama8b-er-afg-v88-seed2-hx
giovannidemuri
2025-08-12T04:39:01Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T02:39:16Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Llama-3.1-8B tags: - generated_from_trainer model-index: - name: llama8b-er-afg-v88-seed2-hx results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama8b-er-afg-v88-seed2-hx This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 2 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu128 - Datasets 3.6.0 - Tokenizers 0.21.2
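A minimal generation sketch for this checkpoint with standard `transformers` APIs (the prompt and decoding settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "giovannidemuri/llama8b-er-afg-v88-seed2-hx"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Explain gradient clipping in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```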
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754973435
afasdfdfadsf
2025-08-12T04:38:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:38:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
RMCian/blockassist-bc-wiry_sturdy_cobra_1754973466
RMCian
2025-08-12T04:38:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry sturdy cobra", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:38:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry sturdy cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754973362
ggozzy
2025-08-12T04:37:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:37:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ecamli/blockassist-bc-hulking_soft_hippo_1754973272
ecamli
2025-08-12T04:35:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:35:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
relapseone/blockassist-bc-insectivorous_prickly_shrew_1754971266
relapseone
2025-08-12T04:35:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous prickly shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:35:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous prickly shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754973056
ggozzy
2025-08-12T04:32:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:32:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bambangbukan/blockassist-bc-singing_burrowing_chicken_1754972917
bambangbukan
2025-08-12T04:30:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "singing burrowing chicken", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:29:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - singing burrowing chicken --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1754971394
koloni
2025-08-12T04:29:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:29:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754972888
IvanJAjebu
2025-08-12T04:29:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:29:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
wanpance/blockassist-bc-scavenging_invisible_prawn_1754972790
wanpance
2025-08-12T04:28:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scavenging invisible prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:27:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scavenging invisible prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754972721
afasdfdfadsf
2025-08-12T04:27:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:26:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
gsaltintas/gsa-supertoken-gpt-4o
gsaltintas
2025-08-12T04:24:47Z
0
0
transformers
[ "transformers", "safetensors", "llama", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-08-12T03:48:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bustamiyusoef/TrOCR_JHR_few_shot
bustamiyusoef
2025-08-12T04:20:53Z
0
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-to-text
2025-08-12T04:16:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PwC-KR-GenAI/SamilPwC_AX_Node_GenAI_Team_expr
PwC-KR-GenAI
2025-08-12T04:18:52Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-12T04:18:52Z
--- license: apache-2.0 ---
PJMixers-Images/lightx2v_Qwen-Image-Lightning-4step-8step-Merge
PJMixers-Images
2025-08-12T04:17:33Z
0
1
diffusers
[ "diffusers", "Qwen-Image", "distillation", "LoRA", "merge", "text-to-image", "en", "zh", "base_model:Qwen/Qwen-Image", "base_model:finetune:Qwen/Qwen-Image", "license:apache-2.0", "region:us" ]
text-to-image
2025-08-12T01:30:28Z
--- license: apache-2.0 base_model: Qwen/Qwen-Image language: - en - zh pipeline_tag: text-to-image library_name: diffusers widget: - text: "A close-up portrait of a dog with black, brown, and white fur, a white stripe on its forehead, and brown and black markings on its ears, is looking directly at the camera with a serious expression. The dog has brown eyes with black pupils and a black nose, and its ears are large and pointed. The background is blurred and appears to be an outdoor setting with green and brown grass and a light grey sky." output: url: examples/Qwen-Image_00133_.png - text: "Close-up food photo of a hybrid snail composed entirely of glossy sticky cinnamon buns. The shell is made from a puffy perfectly swirled cinnamon bun covered in a thick glossy white glaze. Baked edges with a jagged cinnamon bun texture slightly caramelized, dark cinnamon filling inside, rich golden brown color. The glaze drips down in thick sweet drops, the snail tendrils are made of twisted cinnamon dough, glistening with icing sugar, the glaze reflects warm, natural light. The scene is shot in a soft, fuzzy kitchen setting, with a hint of freshly baked pastries in the background." output: url: examples/Qwen-Image_00134_.png - text: "8-bit pixel art of a pidgeon wearing a lab coat, and a tie. The background is large computer server room. The lighting is dark, with most light hitting the servers and not the pidgeon." output: url: examples/Qwen-Image_00136_.png - text: "A long tunnel with a high ceiling is seen dimly lit, illuminated by a single fluorescent light fixture at the end of the tunnel. The tunnel walls are made of corrugated metal and are lined with copper pipes. On the left wall, there is a yellow warning sign with a black exclamation mark and the text \"WARNING - MILITARY TESTING\" in black letters. To the right of the warning sign, on the right wall, is a green control panel with various knobs and switches, and a black and yellow warning tape is attached to the control panel. The floor is dark and wet, reflecting the light from the fluorescent light. A metal grate is visible on the floor." output: url: examples/Qwen-Image_00137_.png tags: - Qwen-Image - distillation - LoRA - merge --- # 50/50 merge of the 4-step and 8-step LoRA <Gallery /> ## My recommended settings - LoRA Strength: 0.9 (or possibly even lower) - Steps: 16 - Sampler: DEIS - Scheduler: KL Optimal - Shift: None (I removed the node, since it made no difference after I swapped to the KL Optimal scheduler.) ## Reason for making The 4-step LoRA does fairly well at 4 steps, but it cannot go higher than 4 steps without overcooking the image, and even at 4 steps the image feels a little cooked. <p align="center"> <img src="examples/comparisons/Comparison_00001_.png" height="480px"/> <span style="font-size: small;">4-step lora | 4 steps vs. 8 steps vs. 16 steps | [1536x1536, no shift, lora strength 1, deis, kl_optimal, seed 187]</span> </p> The 8-step LoRA, on the other hand, is very undercooked at 4 steps and still a little undercooked at 8 steps; it handles higher step counts like 16 really well, though it still feels a little undercooked overall. <p align="center"> <img src="examples/comparisons/Comparison_00002_.png" height="480px"/> <span style="font-size: small;">8-step lora | 4 steps vs. 8 steps vs. 16 steps | [1536x1536, no shift, lora strength 1, deis, kl_optimal, seed 187]</span> </p> Merging these two together results in being able to do 16 steps without overcooking or undercooking. 
It feels *just about right*, especially if you load at 90% strength. <p align="center"> <img src="examples/comparisons/Comparison_00005_.png" height="480px"/> <span style="font-size: small;">merged lora | 4 steps vs. 8 steps vs. 16 steps | [1536x1536, no shift, lora strength 1, deis, kl_optimal, seed 187]</span> </p>
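For reference, a minimal diffusers sketch of the recommended settings above. It assumes diffusers can build a Qwen-Image pipeline through `AutoPipelineForText2Image` (the same entry point the FLUX LoRA cards on this page use for their base model) and that the merged LoRA ships as `lora.safetensors` — both the pipeline support and the filename are assumptions, and the ComfyUI DEIS + KL Optimal pair has no one-to-one diffusers equivalent, so the default scheduler is kept:

```py
from diffusers import AutoPipelineForText2Image
import torch

# Assumption: Qwen/Qwen-Image is loadable via AutoPipelineForText2Image.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")

# Assumption: the merged LoRA file in this repo is named lora.safetensors.
pipeline.load_lora_weights(
    "PJMixers-Images/lightx2v_Qwen-Image-Lightning-4step-8step-Merge",
    weight_name="lora.safetensors",
    adapter_name="lightning_merge",
)
# Apply the merge below full strength (0.9), per the recommendation above.
pipeline.set_adapters(["lightning_merge"], adapter_weights=[0.9])

image = pipeline(
    "A close-up portrait of a dog looking directly at the camera.",
    num_inference_steps=16,  # the merge is tuned for ~16 steps
).images[0]
image.save("qwen-image-lightning-merge.png")
```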
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754972145
afasdfdfadsf
2025-08-12T04:17:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:16:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tamewild/4b_v46_merged_e8
tamewild
2025-08-12T04:16:48Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T04:13:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SmokeST/lettascar2
SmokeST
2025-08-12T04:13:02Z
0
0
null
[ "lora", "flux", "stable-diffusion", "text-to-image", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-08-12T04:06:57Z
--- license: creativeml-openrail-m pipeline_tag: text-to-image base_model: runwayml/stable-diffusion-v1-5 tags: - lora - flux - stable-diffusion ---
hafidhsoekma/test-g1.7b-2-checkpoint-1000
hafidhsoekma
2025-08-12T04:12:58Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T04:05:58Z
--- base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** hafidhsoekma - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-1.7B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
bboeun/food-finetuned2-model
bboeun
2025-08-12T04:12:31Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T10:36:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754971860
ggozzy
2025-08-12T04:12:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:11:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hobson123/blockassist-bc-mammalian_dense_gibbon_1754971490
hobson123
2025-08-12T04:10:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian dense gibbon", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:10:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mammalian dense gibbon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754971668
ggozzy
2025-08-12T04:09:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:08:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754971625
afasdfdfadsf
2025-08-12T04:08:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:07:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
flyingbugs/Qwen2.5-Math-7B-limo-32b
flyingbugs
2025-08-12T04:07:36Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:flyingbugs/limo-deepseek32b-responses", "base_model:Qwen/Qwen2.5-Math-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T03:26:29Z
--- base_model: Qwen/Qwen2.5-Math-7B-Instruct datasets: flyingbugs/limo-deepseek32b-responses library_name: transformers model_name: Qwen2.5-Math-7B-limo-32b tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for Qwen2.5-Math-7B-limo-32b This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the [flyingbugs/limo-deepseek32b-responses](https://huggingface.co/datasets/flyingbugs/limo-deepseek32b-responses) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="flyingbugs/Qwen2.5-Math-7B-limo-32b", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jjh233/huggingface/runs/krfigq0z) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1+cu121 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Jusstin/blockassist-bc-omnivorous_polished_mule_1754971521
Jusstin
2025-08-12T04:06:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "omnivorous polished mule", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:05:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - omnivorous polished mule --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BootesVoid/cme7yi48e001trts8o87yxrtt_cme7yudrm002wrts86fdjz5hn
BootesVoid
2025-08-12T04:03:53Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-12T04:03:50Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SEXY --- # Cme7Yi48E001Trts8O87Yxrtt_Cme7Yudrm002Wrts86Fdjz5Hn <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SEXY` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SEXY", "lora_weights": "https://huggingface.co/BootesVoid/cme7yi48e001trts8o87yxrtt_cme7yudrm002wrts86fdjz5hn/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cme7yi48e001trts8o87yxrtt_cme7yudrm002wrts86fdjz5hn', weight_name='lora.safetensors') image = pipeline('SEXY').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cme7yi48e001trts8o87yxrtt_cme7yudrm002wrts86fdjz5hn/discussions) to add images that show off what you’ve made with this LoRA.
mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF
mradermacher
2025-08-12T04:00:05Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "dpo", "en", "base_model:AmberYifan/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k", "base_model:quantized:AmberYifan/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-12T01:17:15Z
--- base_model: AmberYifan/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k language: - en library_name: transformers model_name: Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - generated_from_trainer - trl - dpo --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/AmberYifan/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
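If you prefer scripting the download over fetching files by hand, here is a minimal llama-cpp-python sketch — an assumption on top of this card, not part of it, and it presumes your installed version provides `Llama.from_pretrained`; the filename is the Q4_K_M entry from the table above:

```python
from llama_cpp import Llama

# Download the "fast, recommended" Q4_K_M quant from this repo and load it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k-GGUF",
    filename="Qwen2.5-7B-Instruct-wildfeedback-iterDPO-iter2-4k.Q4_K_M.gguf",
    n_ctx=4096,  # context window for the loaded model
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what DPO fine-tuning does."}]
)
print(out["choices"][0]["message"]["content"])
```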
Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF
Jeol
2025-08-12T03:56:22Z
0
0
transformers
[ "transformers", "gguf", "vllm", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Jinx-org/Jinx-gpt-oss-20b", "base_model:quantized:Jinx-org/Jinx-gpt-oss-20b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T03:55:13Z
--- library_name: transformers license: apache-2.0 pipeline_tag: text-generation base_model: Jinx-org/Jinx-gpt-oss-20b tags: - vllm - llama-cpp - gguf-my-repo --- # Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF This model was converted to GGUF format from [`Jinx-org/Jinx-gpt-oss-20b`](https://huggingface.co/Jinx-org/Jinx-gpt-oss-20b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Jinx-org/Jinx-gpt-oss-20b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF --hf-file jinx-gpt-oss-20b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF --hf-file jinx-gpt-oss-20b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF --hf-file jinx-gpt-oss-20b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF --hf-file jinx-gpt-oss-20b-q4_k_m.gguf -c 2048 ```
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754970849
afasdfdfadsf
2025-08-12T03:55:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:54:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AshwinKM2005/Hangman_TrexQuant
AshwinKM2005
2025-08-12T03:53:05Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-12T03:51:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bimobbb/blockassist-bc-energetic_lanky_frog_1754970425
bimobbb
2025-08-12T03:53:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "energetic lanky frog", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:51:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - energetic lanky frog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cuongdk253/gpt-oss-fine-tune-10082025
cuongdk253
2025-08-12T03:52:00Z
0
0
transformers
[ "transformers", "gpt_oss", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "mxfp4", "region:us" ]
text-generation
2025-08-10T12:06:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jusstin/blockassist-bc-omnivorous_polished_mule_1754970663
Jusstin
2025-08-12T03:51:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "omnivorous polished mule", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:51:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - omnivorous polished mule --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754970549
afasdfdfadsf
2025-08-12T03:50:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:49:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754970471
IvanJAjebu
2025-08-12T03:48:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:48:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Dante-7B-GGUF
mradermacher
2025-08-12T03:46:18Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:outflanknl/Dante-7B", "base_model:quantized:outflanknl/Dante-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-12T02:55:38Z
--- base_model: outflanknl/Dante-7B language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/outflanknl/Dante-7B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Dante-7B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Dante-7B-GGUF/resolve/main/Dante-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
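As with the quant repo above, a short sketch for loading one of these files programmatically — here via `huggingface_hub` plus llama-cpp-python, with the filename taken from the Q4_K_M row of the table; both libraries are assumptions layered on this card, not something it prescribes:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the "fast, recommended" Q4_K_M quant listed in the table above.
path = hf_hub_download(
    repo_id="mradermacher/Dante-7B-GGUF",
    filename="Dante-7B.Q4_K_M.gguf",
)
# Load the local GGUF file and run a short completion.
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Midway upon the journey of our life", max_tokens=64)["choices"][0]["text"])
```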
outlookAi/OLcGoQXwmy
outlookAi
2025-08-12T03:44:01Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-12T03:26:19Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Mauy2 --- # Olcgoqxwmy <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Mauy2` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Mauy2", "lora_weights": "https://huggingface.co/outlookAi/OLcGoQXwmy/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('outlookAi/OLcGoQXwmy', weight_name='lora.safetensors') image = pipeline('Mauy2').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1200 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/outlookAi/OLcGoQXwmy/discussions) to add images that show off what you’ve made with this LoRA.
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754970142
IvanJAjebu
2025-08-12T03:43:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:43:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bambangbukan/blockassist-bc-singing_burrowing_chicken_1754969968
bambangbukan
2025-08-12T03:41:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "singing burrowing chicken", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:40:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - singing burrowing chicken --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754969915
afasdfdfadsf
2025-08-12T03:40:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:39:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Hfkjc/blockassist-bc-fanged_stinging_sandpiper_1754969505
Hfkjc
2025-08-12T03:38:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fanged stinging sandpiper", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:38:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fanged stinging sandpiper --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
YG0628/CVE-CWE-CAPEC-Mapping-Model
YG0628
2025-08-12T03:37:08Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T09:14:22Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** YG0628 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
John6666/noobai-v-pred-10-with-eq-vae-experimental-eq-vae-sdxl
John6666
2025-08-12T03:37:08Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "less noisy", "cleaner colors", "finetune", "EQVAE", "v-pred", "merge", "noobai", "illustrious", "en", "base_model:Anzhc/MS-LC-EQ-D-VR_VAE", "base_model:merge:Anzhc/MS-LC-EQ-D-VR_VAE", "base_model:Laxhar/noobai-XL-Vpred-1.0", "base_model:merge:Laxhar/noobai-XL-Vpred-1.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-08-12T03:30:32Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - less noisy - cleaner colors - finetune - EQVAE - v-pred - merge - noobai - illustrious base_model: - Laxhar/noobai-XL-Vpred-1.0 - Anzhc/MS-LC-EQ-D-VR_VAE --- The original model is [here](https://civitai.com/models/1858821/noobai-v-pred-10-with-eq-vae?modelVersionId=2103794). The author's page is [here](https://huggingface.co/Bluvoll). This model was created by [bluvoll](https://civitai.com/user/bluvoll).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754969729
IvanJAjebu
2025-08-12T03:36:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:36:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754968608
Sayemahsjn
2025-08-12T03:35:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:35:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
apriasmoro/8a3dc043-6cc3-4349-b521-2e4e76a022c8
apriasmoro
2025-08-12T03:33:33Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "grpo", "trl", "axolotl", "conversational", "arxiv:2402.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T03:33:16Z
--- library_name: transformers model_name: 8a3dc043-6cc3-4349-b521-2e4e76a022c8 tags: - generated_from_trainer - grpo - trl - axolotl licence: license --- # Model Card for 8a3dc043-6cc3-4349-b521-2e4e76a022c8 This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="apriasmoro/8a3dc043-6cc3-4349-b521-2e4e76a022c8", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.7.1+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
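The template above omits the GRPO training loop itself. As a rough orientation, a minimal GRPO run with TRL 0.21 looks like the sketch below; the base model, toy dataset, and reward function are illustrative assumptions, not this author's actual setup:

```python
# Minimal GRPO sketch with TRL (base model, data, and reward are assumptions).
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy verifiable reward: prefer shorter completions.
    return [-float(len(c)) for c in completions]

train_dataset = Dataset.from_dict(
    {"prompt": ["Write a haiku about the sea.", "Explain GRPO in one sentence."]}
)
args = GRPOConfig(
    output_dir="grpo-demo",
    per_device_train_batch_size=2,
    num_generations=2,  # batch size must be divisible by num_generations
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",  # assumed small base for illustration
    reward_funcs=reward_len,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```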
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754969461
afasdfdfadsf
2025-08-12T03:32:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:31:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754969338
IvanJAjebu
2025-08-12T03:30:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:30:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
motza0025/blockassist-bc-silent_peaceful_alpaca_1754967982
motza0025
2025-08-12T03:29:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silent peaceful alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:29:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silent peaceful alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bobchenyx/Kimi-K2-Instruct-GGUF
bobchenyx
2025-08-12T03:29:53Z
628
1
null
[ "gguf", "text-generation", "base_model:moonshotai/Kimi-K2-Instruct", "base_model:quantized:moonshotai/Kimi-K2-Instruct", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-07-29T16:57:29Z
--- quantized_by: bobchenyx license: mit base_model: - moonshotai/Kimi-K2-Instruct pipeline_tag: text-generation base_model_relation: quantized --- ## Llamacpp Quantizations of Kimi-K2-Instruct Original model: [moonshotai/Kimi-K2-Instruct](https://huggingface.co/moonshotai/Kimi-K2-Instruct). All quants were made with [bartowski1182-llama.cpp](https://github.com/bartowski1182/llama.cpp). All quants use imatrix and the BF16 conversion from [unsloth/Kimi-K2-Instruct-GGUF/BF16](https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF/tree/main/BF16). **IQ1_S : 197.39 GiB (1.65 BPW)** **IQ1_M : 206.03 GiB (1.72 BPW)** **IQ2_S : 265.71 GiB (2.22 BPW)** **Q2_K : 335.39 GiB (2.81 BPW)** --- ## Download (Example) ```python # !pip install huggingface_hub hf_transfer import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" from huggingface_hub import snapshot_download snapshot_download( repo_id = "bobchenyx/Kimi-K2-Instruct-GGUF", local_dir = "bobchenyx/Kimi-K2-Instruct-GGUF", allow_patterns = ["*IQ1_M*"], ) ```
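Once a quant is downloaded, it can be loaded directly with the llama-cpp-python bindings. A minimal sketch (the shard filename, context size, and offload settings below are assumptions; check the actual filenames in the repo):

```python
# Minimal sketch: chat with a downloaded GGUF quant via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="bobchenyx/Kimi-K2-Instruct-GGUF/Kimi-K2-Instruct-IQ1_M-00001-of-00005.gguf",  # assumed shard name
    n_ctx=4096,       # context window; raise if memory allows
    n_gpu_layers=-1,  # offload all layers to GPU when available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is an imatrix quant?"}]
)
print(out["choices"][0]["message"]["content"])
```

For split GGUF files, pointing `model_path` at the first shard is typically enough; llama.cpp picks up the remaining shards automatically.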
michaelwaves/gptoss20b-production-sabotage
michaelwaves
2025-08-12T03:29:32Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:openai/gpt-oss-20b", "lora", "sft", "transformers", "trl", "base_model:openai/gpt-oss-20b", "region:us" ]
null
2025-08-12T03:29:12Z
--- base_model: openai/gpt-oss-20b library_name: peft model_name: output_2 tags: - base_model:adapter:openai/gpt-oss-20b - lora - sft - transformers - trl licence: license --- # Model Card for output_2 This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="michaelwaves/gptoss20b-production-sabotage", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - PEFT 0.17.0 - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
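Because this repository holds a LoRA adapter rather than merged weights, another way to use it is to attach the adapter to the base model explicitly. A minimal sketch with PEFT (the dtype and device placement choices are assumptions):

```python
# Minimal sketch: attach the LoRA adapter in this repo to its base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "michaelwaves/gptoss20b-production-sabotage")
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```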
FlagRelease/Qwen3-4B-hygon-FlagOS
FlagRelease
2025-08-12T03:25:21Z
0
0
null
[ "safetensors", "qwen3", "region:us" ]
null
2025-08-11T06:43:41Z
# Introduction **FlagOS** is a unified heterogeneous computing software stack for large models, co-developed with leading global chip manufacturers. With core technologies such as the **FlagScale** distributed training/inference framework, **FlagGems** universal operator library, **FlagCX** communication library, and **FlagTree** unified compiler, the **FlagRelease** platform leverages the FlagOS stack to automatically produce and release various combinations of <chip + open-source model>. This enables efficient and automated model migration across diverse chips, opening a new chapter for large model deployment and application. Based on this, the **Qwen3-4B-hygon-FlagOS** model is adapted for the Hygon chip using the FlagOS software stack, enabling: ### Integrated Deployment - Deep integration with the open-source [FlagScale framework](https://github.com/FlagOpen/FlagScale) - Out-of-the-box inference scripts with pre-configured hardware and software parameters - Released **FlagOS** container image supporting deployment within minutes ### Consistency Validation - Rigorously evaluated through benchmark testing: performance and results from the FlagOS software stack are compared against native stacks on multiple public benchmarks. # Technical Overview ## **FlagScale Distributed Training and Inference Framework** FlagScale is an end-to-end framework for large models across heterogeneous computing resources, maximizing computational efficiency and ensuring model validity through core technologies. Its key advantages include: - **Unified Deployment Interface:** Standardized command-line tools support one-click service deployment across multiple hardware platforms, significantly reducing adaptation costs in heterogeneous environments. - **Intelligent Parallel Optimization:** Automatically generates optimal distributed parallel strategies based on chip computing characteristics, achieving dynamic load balancing of computation/communication resources. - **Seamless Operator Switching:** Deep integration with the FlagGems operator library allows high-performance operators to be invoked via environment variables without modifying model code. ## **FlagGems Universal Large-Model Operator Library** FlagGems is a Triton-based, cross-architecture operator library collaboratively developed with industry partners. Its core strengths include: - **Full-stack Coverage**: Over 100 operators, with a broader range of operator types than competing libraries. - **Ecosystem Compatibility**: Supports 7 accelerator backends. Ongoing optimizations have significantly improved performance. - **High Efficiency**: Employs unique code generation and runtime optimization techniques for faster secondary development and better runtime performance compared to alternatives. ## **FlagEval Evaluation Framework** **FlagEval (Libra)** is a comprehensive evaluation system and open platform for large models launched in 2023. It aims to establish scientific, fair, and open benchmarks, methodologies, and tools to help researchers assess model and training algorithm performance. It features: - **Multi-dimensional Evaluation**: Supports 800+ model evaluations across NLP, CV, Audio, and Multimodal fields, covering 20+ downstream tasks including language understanding and image-text generation.
# Evaluation Results ## Benchmark Result | Metrics | Qwen3-4B-H100-CUDA | Qwen3-4B-hygon-FlagOS | | --------- | ------------------ | ---------------------- | | liveBench-0shot@avg1 | 0.501 | 0.496 | | AIME-0shot@avg1 | 0.700 | 0.667 | | MMLU-5shots@avg1 | 0.669 | 0.671 | | MUSR-0shot@avg1 | 0.590 | 0.593 | | GPQA-0shot@avg1 | 0.410 | 0.430 | # User Guide **Environment Setup** | Component | Version | | ------------- | ------------------------------------------------------------ | | Accelerator Card Driver | Kernel Mode Driver Version: 2.3.0 | | Docker Version | Docker version 24.0.6, build ed223bc | | Operating System | Ubuntu 22.04.4 LTS | | FlagScale | Version: 0.8.0 | | FlagGems | Version: 3.0 | ## Operation Steps ### Download Open-source Model Weights ```bash pip install modelscope modelscope download --model Qwen/Qwen3-4B --local_dir /share/Qwen3-4B ``` ### Download FlagOS Image Please note: it has not yet been decided whether Hygon's FlagOS image will be publicly accessible over the internet. To obtain the image, contact us or Hygon through issues. ```bash docker pull harbor.baai.ac.cn/flagrelease-inner/flagrelease_hygon_qwen3 ``` ### Start the inference service ```bash # Container Startup docker run -it \ --name=flagos \ --network=host \ --privileged \ --ipc=host \ --shm-size=16G \ --memory="512g" \ --ulimit stack=-1:-1 \ --ulimit memlock=-1:-1 \ --cap-add=SYS_PTRACE \ --security-opt seccomp=unconfined \ --device=/dev/kfd \ --device=/dev/dri \ --group-add video \ -u root \ -v /opt/hyhal:/opt/hyhal \ -v /share:/share \ harbor.baai.ac.cn/flagrelease-inner/flagrelease_hygon_qwen3 \ /bin/bash ``` ### Serve ```bash flagscale serve qwen3 ``` ## Service Invocation ### API-based Invocation Script ```python import openai openai.api_key = "EMPTY" openai.base_url = "http://<server_ip>:9010/v1/" model = "Qwen3-4B-hygon-flagos" messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What's the weather like today?"} ] response = openai.chat.completions.create( model=model, messages=messages, temperature=0.7, top_p=0.95, stream=False, ) print(response.choices[0].message.content) ``` ### AnythingLLM Integration Guide #### 1. Download & Install - Visit the official site: https://anythingllm.com/ - Choose the appropriate version for your OS (Windows/macOS/Linux) - Follow the installation wizard to complete the setup #### 2. Configuration - Launch AnythingLLM - Open settings (bottom left, fourth tab) - Configure core LLM parameters - Click "Save Settings" to apply changes #### 3. Model Interaction - After model loading is complete: - Click **"New Conversation"** - Enter your question (e.g., “Explain the basics of quantum computing”) - Click the send button to get a response # Contributing We warmly welcome global developers to join us: 1. Submit Issues to report problems 2. Create Pull Requests to contribute code 3. Improve technical documentation 4. Expand hardware adaptation support # License The weights of this model come from Qwen/Qwen3-4B and are released under the Apache 2.0 license (https://www.apache.org/licenses/LICENSE-2.0.txt).
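For long generations it can be preferable to stream tokens instead of waiting for the full completion. A minimal streaming sketch against the same OpenAI-compatible endpoint (the client-style API shown here is an assumption; the endpoint and model name are taken from the script above):

```python
# Minimal streaming sketch against the same OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://<server_ip>:9010/v1/")
stream = client.chat.completions.create(
    model="Qwen3-4B-hygon-flagos",
    messages=[{"role": "user", "content": "What's the weather like today?"}],
    temperature=0.7,
    top_p=0.95,
    stream=True,  # yields incremental chunks instead of one response
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```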
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754968906
IvanJAjebu
2025-08-12T03:23:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:22:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Obiwank107/blockassist-bc-tame_foxy_aardvark_1754965474
Obiwank107
2025-08-12T03:18:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tame foxy aardvark", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:18:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tame foxy aardvark --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
calegpedia/blockassist-bc-stealthy_slimy_rooster_1754966907
calegpedia
2025-08-12T03:14:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:14:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754968190
IvanJAjebu
2025-08-12T03:11:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:10:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754968173
fatepurriyaz
2025-08-12T03:10:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "aquatic pawing pig", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:10:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - aquatic pawing pig --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tuantranmlv/contractbert_dichvu_nghhiemthudichvu
tuantranmlv
2025-08-12T03:09:58Z
3
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-11T02:55:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
0xGareeb/blockassist-bc-nimble_shaggy_zebra_1754968014
0xGareeb
2025-08-12T03:09:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "nimble shaggy zebra", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:08:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - nimble shaggy zebra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754967991
afasdfdfadsf
2025-08-12T03:08:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:07:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jahyungu/Falcon3-7B-Instruct_TACO
jahyungu
2025-08-12T03:07:46Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "dataset:taco", "base_model:tiiuae/Falcon3-7B-Instruct", "base_model:finetune:tiiuae/Falcon3-7B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T05:01:53Z
--- library_name: transformers license: other base_model: tiiuae/Falcon3-7B-Instruct tags: - generated_from_trainer datasets: - taco model-index: - name: Falcon3-7B-Instruct_TACO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Falcon3-7B-Instruct_TACO This model is a fine-tuned version of [tiiuae/Falcon3-7B-Instruct](https://huggingface.co/tiiuae/Falcon3-7B-Instruct) on the taco dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
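For orientation, the hyperparameters listed above correspond roughly to the following `TrainingArguments`; this is an illustrative reconstruction, not the author's actual training script (the output directory name is an assumption):

```python
# Illustrative mapping of the listed hyperparameters to TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Falcon3-7B-Instruct_TACO",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=8,   # total_train_batch_size = 2 * 8 = 16
    optim="adamw_torch",             # betas/epsilon are the listed AdamW defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=2,
)
```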
Vanbitcase/qwen-7b-124r-adapter
Vanbitcase
2025-08-12T03:06:35Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T03:02:23Z
--- base_model: unsloth/qwen2-vl-7b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_vl - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Vanbitcase - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2-vl-7b-instruct-bnb-4bit This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
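Since the repo name and tags mark this as an Unsloth-trained adapter for Qwen2-VL, a minimal loading sketch with PEFT and transformers follows (assuming the repo stores a PEFT LoRA adapter; loading the 4-bit base requires bitsandbytes):

```python
# Minimal sketch: attach this adapter to its 4-bit Qwen2-VL base model.
from peft import PeftModel
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

base = Qwen2VLForConditionalGeneration.from_pretrained(
    "unsloth/qwen2-vl-7b-instruct-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Vanbitcase/qwen-7b-124r-adapter")
processor = AutoProcessor.from_pretrained("unsloth/qwen2-vl-7b-instruct-bnb-4bit")
# From here, build multimodal chat messages with the processor
# and call model.generate as with any Qwen2-VL checkpoint.
```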
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754966818
Sayemahsjn
2025-08-12T03:05:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:05:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akhyar919/model-name
akhyar919
2025-08-12T03:02:53Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-12T03:02:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MMS-VIDEOS-18-tau-viral-video-Clip/New.full.videos.tau.Viral.Video.Official.Tutorial
MMS-VIDEOS-18-tau-viral-video-Clip
2025-08-12T03:00:40Z
0
0
null
[ "region:us" ]
null
2025-08-12T03:00:23Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?leaked-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
hobson123/blockassist-bc-mammalian_dense_gibbon_1754967277
hobson123
2025-08-12T03:00:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian dense gibbon", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:00:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mammalian dense gibbon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
InfiX-ai/InfiGUI-G1-7B
InfiX-ai
2025-08-12T02:53:07Z
3
3
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "gui", "agent", "gui-grounding", "reinforcement-learning", "image-text-to-text", "conversational", "en", "arxiv:2508.05731", "arxiv:2504.14239", "arxiv:2501.04575", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-07T19:35:46Z
--- base_model: - Qwen/Qwen2.5-VL-7B-Instruct language: - en library_name: transformers license: apache-2.0 pipeline_tag: image-text-to-text tags: - gui - agent - gui-grounding - reinforcement-learning --- # InfiGUI-G1-7B This repository contains the InfiGUI-G1-7B model from the paper **[InfiGUI-G1: Advancing GUI Grounding with Adaptive Exploration Policy Optimization](https://arxiv.org/abs/2508.05731)**. <p align="left"> <a href="https://arxiv.org/abs/2508.05731"><img src="https://img.shields.io/badge/arXiv-Preprint-b31b1b?style=flat&logo=arxiv&logoColor=white" alt="arXiv Paper"></a> <a href="https://huggingface.co/papers/2508.05731"><img src="https://img.shields.io/badge/HuggingFace-Daily%20Papers-ff9800?style=flat&logo=huggingface" alt="Hugging Face Paper"></a> <a href="https://huggingface.co/InfiX-ai/InfiGUI-G1-3B"><img src="https://img.shields.io/badge/Model-InfiGUI--G1--3B-007ec6?style=flat&logo=huggingface" alt="InfiGUI-G1 3B Model"></a> <a href="https://github.com/InfiXAI/InfiGUI-G1"><img src="https://img.shields.io/badge/GitHub-Repo-181717?style=flat&logo=github&logoColor=white" alt="GitHub Repo"></a> </p> ## Model Description The model is based on `Qwen2.5-VL-7B-Instruct` and is fine-tuned using our proposed **Adaptive Exploration Policy Optimization (AEPO)** framework. AEPO is a novel reinforcement learning method designed to enhance the model's **semantic alignment** for GUI grounding tasks. It overcomes the exploration bottlenecks of standard RLVR methods by integrating a multi-answer generation strategy with a theoretically-grounded adaptive reward function, enabling more effective and efficient learning for complex GUI interactions. ## Paper Overview A fundamental challenge for GUI agents is robustly grounding natural language instructions, which requires not only precise **spatial alignment** (locating elements accurately) but also correct **semantic alignment** (identifying the functionally appropriate element). While existing Reinforcement Learning with Verifiable Rewards (RLVR) methods have enhanced spatial precision, they often suffer from inefficient exploration. This "confidence trap" bottlenecks semantic alignment, preventing models from discovering correct actions for difficult semantic associations. To address this critical exploration problem, we introduce **InfiGUI-G1**, a series of models trained with **Adaptive Exploration Policy Optimization (AEPO)**. AEPO overcomes the exploration bottleneck by integrating a **multi-answer generation** strategy to explore a diverse set of candidate actions in a single forward pass. This exploration is guided by a theoretically-grounded **Adaptive Exploration Reward (AER)** function, derived from first principles of efficiency (η=U/C), which provides rich, informative learning signals to dynamically balance exploration and exploitation. ## Quick Start ### Installation First, install the required dependencies: ```bash pip install transformers qwen-vl-utils ```` ### Example ```python import json import math import torch import requests from io import BytesIO from PIL import Image, ImageDraw, ImageFont from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor from qwen_vl_utils import process_vision_info, smart_resize MAX_IMAGE_PIXELS = 5600 * 28 * 28 def resize_image(width: int, height: int, max_pixels: int) -> tuple[int, int]: """ Resize image to fit within max_pixels constraint while maintaining aspect ratio. Applies smart_resize for final dimension optimization. 
""" current_pixels = width * height if current_pixels <= max_pixels: target_width, target_height = width, height else: scale_factor = math.sqrt(max_pixels / current_pixels) target_width = round(width * scale_factor) target_height = round(height * scale_factor) # Apply smart_resize for final dimensions final_height, final_width = smart_resize(target_height, target_width) return final_width, final_height def load_image(img_path: str) -> Image.Image: """Load image from URL or local path.""" if img_path.startswith("https://"): response = requests.get(img_path) return Image.open(BytesIO(response.content)) else: return Image.open(img_path) def visualize_points(original_image: Image.Image, points: list, new_width: int, new_height: int, original_width: int, original_height: int) -> None: """Draw prediction points on original image and save as output.png.""" output_img = original_image.copy() draw = ImageDraw.Draw(output_img) font = ImageFont.load_default(size=100) for i, point_data in enumerate(points): coords = point_data['point_2d'] # Map coordinates from resized image back to original image original_x = int(coords[0] / new_width * original_width) original_y = int(coords[1] / new_height * original_height) label = str(i + 1) # Draw circle circle_radius = 20 draw.ellipse([original_x - circle_radius, original_y - circle_radius, original_x + circle_radius, original_y + circle_radius], fill=(255, 0, 0)) # Draw label draw.text((original_x + 20, original_y - 20), label, fill=(255, 0, 0), font=font) print(f"Point {i+1}: Predicted coordinates {coords} -> Mapped coordinates [{original_x}, {original_y}]") output_img.save("output.png") print(f"Visualization with {len(points)} points saved to output.png") def main(): # Load model and processor model = Qwen2_5_VLForConditionalGeneration.from_pretrained( "InfiX-ai/InfiGUI-G1-7B", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", device_map="auto" ) processor = AutoProcessor.from_pretrained("InfiX-ai/InfiGUI-G1-7B", padding_side="left") # Load and process image img_path = "https://raw.githubusercontent.com/InfiXAI/InfiGUI-G1/main/assets/test_image.png" image = load_image(img_path) # Store original image and resize for model input original_image = image.copy() original_width, original_height = image.size new_width, new_height = resize_image(original_width, original_height, MAX_IMAGE_PIXELS) resized_image = image.resize((new_width, new_height)) # Prepare model inputs instruction = "shuffle play the current playlist" system_prompt = 'You FIRST think about the reasoning process as an internal monologue and then provide the final answer.\nThe reasoning process MUST BE enclosed within <think> </think> tags.' prompt = f'''The screen's resolution is {new_width}x{new_height}. 
Locate the UI element(s) for "{instruction}", output the coordinates using JSON format: [{{"point_2d": [x, y]}}, ...]''' messages = [ {"role": "system", "content": system_prompt}, { "role": "user", "content": [ {"type": "image", "image": resized_image}, {"type": "text", "text": prompt} ] } ] # Generate predictions text = processor.apply_chat_template([messages], tokenize=False, add_generation_prompt=True) image_inputs, video_inputs = process_vision_info([messages]) inputs = processor(text=text, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt").to("cuda") generated_ids = model.generate(**inputs, max_new_tokens=512) output_text = processor.batch_decode( [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)], skip_special_tokens=True, clean_up_tokenization_spaces=False ) # Parse and visualize results output_text = output_text[0].split("</think>")[-1].replace("```json", "").replace("```", "").strip() output = json.loads(output_text) if output: visualize_points(original_image, output, new_width, new_height, original_width, original_height) if __name__ == "__main__": main() ``` ## Results Our InfiGUI-G1 models, trained with the AEPO framework, establish new state-of-the-art results among open-source models across a diverse and challenging set of GUI grounding benchmarks: <div align="left"> <table style="width: 100%; max-width: 750px; border-collapse: collapse; border-top: 2px solid #212529; border-bottom: 2px solid #212529; font-family: sans-serif;"> <thead style="background-color: #f8f9fa;"> <tr style="border-bottom: 1.5px solid #212529;"> <th style="padding: 12px 10px; text-align: left; width: 24.9%; font-weight: 600; color: #343a40;">Model</th> <th style="padding: 12px 10px; text-align: center; font-weight: 600; color: #343a40;">MMBench-GUI</th> <th style="padding: 12px 10px; text-align: center; font-weight: 600; color: #343a40;">ScreenSpot-v2</th> <th style="padding: 12px 10px; text-align: center; font-weight: 600; color: #343a40;">UI-Vision</th> <th style="padding: 12px 10px; text-align: center; font-weight: 600; color: #343a40;">I2E-Bench</th> <th style="padding: 12px 10px; text-align: center; font-weight: 600; color: #343a40;">ScreenSpot-Pro</th> </tr> </thead> <tbody> <tr> <td style="padding: 10px; text-align: left;">Qwen2.5-VL-7B</td> <td style="padding: 10px; text-align: center;">33.9</td> <td style="padding: 10px; text-align: center;">88.8</td> <td style="padding: 10px; text-align: center;">0.9</td> <td style="padding: 10px; text-align: center;">53.8</td> <td style="padding: 10px; text-align: center;">-</td> </tr> <tr> <td style="padding: 10px; text-align: left;">GUI-G²-7B</td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: center;"><u>93.3</u></td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: center;">47.5</td> </tr> <tr> <td style="padding: 10px; text-align: left;">UI-TARS-7B</td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: center;">91.6</td> <td style="padding: 10px; text-align: center;">17.6</td> <td style="padding: 10px; text-align: center;">61.4</td> <td style="padding: 10px; text-align: center;">35.7</td> </tr> <tr> <td style="padding: 10px; text-align: left;">UGround-v1-7B</td> <td style="padding: 10px; text-align: center;">65.7</td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: 
center;">12.9</td> <td style="padding: 10px; text-align: center;">70.3</td> <td style="padding: 10px; text-align: center;">-</td> </tr> <tr> <td style="padding: 10px; text-align: left;">UI-TARS-1.5-7B</td> <td style="padding: 10px; text-align: center;">64.3</td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: center;">73.2</td> <td style="padding: 10px; text-align: center;"><u>49.6</u></td> </tr> <tr> <td style="padding: 10px; text-align: left;">Qwen2.5-VL-72B</td> <td style="padding: 10px; text-align: center;">41.8</td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: center;">51.4</td> <td style="padding: 10px; text-align: center;">-</td> </tr> <tr> <td style="padding: 10px; text-align: left;">UGround-v1-72B</td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: center;">-</td> <td style="padding: 10px; text-align: center;">23.2</td> <td style="padding: 10px; text-align: center;"><u>76.3</u></td> <td style="padding: 10px; text-align: center;">-</td> </tr> <tr> <td style="padding: 10px; text-align: left;">UI-TARS-72B</td> <td style="padding: 10px; text-align: center;"><u>74.3</u></td> <td style="padding: 10px; text-align: center;">90.3</td> <td style="padding: 10px; text-align: center;"><u>25.5</u></td> <td style="padding: 10px; text-align: center;">73.7</td> <td style="padding: 10px; text-align: center;">-</td> </tr> <tr> <th colspan="6" style="padding: 10px 12px; text-align: left; font-style: italic; background-color: #f8f9fa; border-top: 1px solid #dee2e6; border-bottom: 1px solid #dee2e6; color: #343a40;">Ours</th> </tr> <tr style="background-color: #f0f8ff;"> <td style="padding: 10px; text-align: left;"><b>InfiGUI-G1-7B</b></td> <td style="padding: 10px; text-align: center;"><b>80.8</b></td> <td style="padding: 10px; text-align: center;"><b>93.5</b></td> <td style="padding: 10px; text-align: center;"><b>26.1</b></td> <td style="padding: 10px; text-align: center;"><b>77.4</b></td> <td style="padding: 10px; text-align: center;"><b>51.9</b></td> </tr> <tr style="background-color: #f0f8ff;"> <td style="padding: 10px; text-align: right;"><i>w/ Expl. Success</i></td> <td style="padding: 10px; text-align: center;">86.4</td> <td style="padding: 10px; text-align: center;">95.6</td> <td style="padding: 10px; text-align: center;">34.4</td> <td style="padding: 10px; text-align: center;">83.0</td> <td style="padding: 10px; text-align: center;">58.0</td> </tr> </tbody> </table> </div> ## Evaluation This section provides instructions for reproducing the evaluation results reported in our paper. ### 1. Getting Started Clone the repository and navigate to the project directory: ```bash git clone https://github.com/InfiXAI/InfiGUI-G1.git cd InfiGUI-G1 ``` ### 2. Environment Setup The evaluation pipeline is built upon the [vLLM](https://github.com/vllm-project/vllm) library for efficient inference. For detailed installation guidance, please refer to the official vLLM repository. The specific versions used to obtain the results reported in our paper are as follows: - **Python**: `3.10.12` - **PyTorch**: `2.6.0` - **Transformers**: `4.50.1` - **vLLM**: `0.8.2` - **CUDA**: `12.6` The reported results were obtained on a server equipped with 4 x NVIDIA H800 GPUs. ### 3. Model Download Download the InfiGUI-G1 models from the Hugging Face Hub into the `./models` directory. 
```bash # Create a directory for models mkdir -p ./models # Download InfiGUI-G1-3B huggingface-cli download --resume-download InfiX-ai/InfiGUI-G1-3B --local-dir ./models/InfiGUI-G1-3B # Download InfiGUI-G1-7B huggingface-cli download --resume-download InfiX-ai/InfiGUI-G1-7B --local-dir ./models/InfiGUI-G1-7B ``` ### 4. Dataset Download and Preparation Download the required evaluation benchmarks into the `./data` directory. ```bash # Create a directory for datasets mkdir -p ./data # Download benchmarks huggingface-cli download --repo-type dataset --resume-download likaixin/ScreenSpot-Pro --local-dir ./data/ScreenSpot-Pro huggingface-cli download --repo-type dataset --resume-download ServiceNow/ui-vision --local-dir ./data/ui-vision huggingface-cli download --repo-type dataset --resume-download OS-Copilot/ScreenSpot-v2 --local-dir ./data/ScreenSpot-v2 huggingface-cli download --repo-type dataset --resume-download OpenGVLab/MMBench-GUI --local-dir ./data/MMBench-GUI huggingface-cli download --repo-type dataset --resume-download vaundys/I2E-Bench --local-dir ./data/I2E-Bench ``` After downloading, some datasets require unzipping compressed image files. ```bash # Unzip images for ScreenSpot-v2 unzip ./data/ScreenSpot-v2/screenspotv2_image.zip -d ./data/ScreenSpot-v2/ # Unzip images for MMBench-GUI unzip ./data/MMBench-GUI/MMBench-GUI-OfflineImages.zip -d ./data/MMBench-GUI/ ``` ### 5. Running the Evaluation To run the evaluation, use the `eval/eval.py` script. You must specify the path to the model, the benchmark name, and the tensor parallel size. Here is an example command to evaluate the `InfiGUI-G1-3B` model on the `screenspot-pro` benchmark using 4 GPUs: ```bash python eval/eval.py \ ./models/InfiGUI-G1-3B \ --benchmark screenspot-pro \ --tensor-parallel 4 ``` - **`model_path`**: The first positional argument specifies the path to the downloaded model directory (e.g., `./models/InfiGUI-G1-3B`). - **`--benchmark`**: Specifies the benchmark to evaluate. Available options include `screenspot-pro`, `screenspot-v2`, `ui-vision`, `mmbench-gui`, and `i2e-bench`. - **`--tensor-parallel`**: Sets the tensor parallelism size, which should typically match the number of available GPUs. Evaluation results, including detailed logs and performance metrics, will be saved to the `./output/{model_name}/{benchmark}/` directory. 
## Citation Information If you find this work useful, we would be grateful if you consider citing the following papers: ```bibtex @misc{liu2025infiguig1advancingguigrounding, title={InfiGUI-G1: Advancing GUI Grounding with Adaptive Exploration Policy Optimization}, author={Yuhang Liu and Zeyu Liu and Shuanghe Zhu and Pengxiang Li and Congkai Xie and Jiasheng Wang and Xueyu Hu and Xiaotian Han and Jianbo Yuan and Xinyao Wang and Shengyu Zhang and Hongxia Yang and Fei Wu}, year={2025}, eprint={2508.05731}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2508.05731}, } ``` ```bibtex @article{liu2025infigui, title={InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners}, author={Liu, Yuhang and Li, Pengxiang and Xie, Congkai and Hu, Xavier and Han, Xiaotian and Zhang, Shengyu and Yang, Hongxia and Wu, Fei}, journal={arXiv preprint arXiv:2504.14239}, year={2025} } ``` ```bibtex @article{liu2025infiguiagent, title={InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning and Reflection}, author={Liu, Yuhang and Li, Pengxiang and Wei, Zishu and Xie, Congkai and Hu, Xueyu and Xu, Xinchen and Zhang, Shengyu and Han, Xiaotian and Yang, Hongxia and Wu, Fei}, journal={arXiv preprint arXiv:2501.04575}, year={2025} } ``` ## Acknowledgements We would like to express our gratitude for the following open-source projects: [VERL](https://github.com/volcengine/verl), [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) and [vLLM](https://github.com/vllm-project/vllm).
otmorozky/AceInstruct-1.5B-Gensyn-Swarm-lazy_sprightly_hippo
otmorozky
2025-08-12T02:52:00Z
99
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am lazy_sprightly_hippo", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-08T15:06:10Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am lazy_sprightly_hippo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
perrx/8.8demo_9
perrx
2025-08-12T02:50:25Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T02:44:02Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

(A generic loading sketch, inferred only from the repo tags, appears at the end of this card.)

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
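Since the quick-start section above is empty, here is a minimal, hedged loading sketch based only on this repo's tags (`transformers`, `qwen3`, `text-generation`). The repo ID below is a hypothetical placeholder, not this model's actual ID:

```
# Hedged sketch: assumes a standard transformers causal-LM checkpoint, as
# suggested by the repo tags (qwen3, text-generation). "author/model-id" is
# a hypothetical placeholder -- substitute the actual repo ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "author/model-id"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```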
roachkins/omega_6yKbJIe
roachkins
2025-08-12T02:50:21Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-12T02:50:20Z
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---

This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.

Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754966874
afasdfdfadsf
2025-08-12T02:49:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:48:38Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
FluidInference/Qwen3-8B-int8-ov
FluidInference
2025-08-12T02:48:03Z
0
0
null
[ "openvino", "qwen3", "base_model:Qwen/Qwen3-8B", "base_model:quantized:Qwen/Qwen3-8B", "license:apache-2.0", "region:us" ]
null
2025-08-12T00:36:37Z
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
base_model:
- Qwen/Qwen3-8B
base_model_relation: quantized
---

# Qwen3-8B-int8-ov

* Model creator: [Qwen](https://huggingface.co/Qwen)
* Original model: [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)

## Description

This is the [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf).

## Quantization Parameters

Weight compression was performed using `nncf.compress_weights` with the following parameters:

* mode: **INT8_ASYM**

For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html). A hedged sketch of this compression flow appears near the end of this card.

## Compatibility

The provided OpenVINO™ IR model is compatible with:

* OpenVINO version 2025.1.0 and higher
* Optimum Intel 1.24.0 and higher

## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

```
pip install optimum[openvino]
```

2. Run model inference:

```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "FluidInference/qwen3-8b-int8-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide.

## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)

1. Install packages required for using OpenVINO GenAI:

```
pip install openvino-genai huggingface_hub
```

2. Download the model from the Hugging Face Hub:

```
import huggingface_hub as hf_hub

model_id = "FluidInference/qwen3-8b-int8-ov"
model_path = "qwen3-8b-int8-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```

3. Run model inference:

```
import openvino_genai as ov_genai

device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
pipe.get_tokenizer().set_chat_template(pipe.get_tokenizer().chat_template)
print(pipe.generate("What is OpenVINO?", max_length=200))
```

More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).

You can find more detailed usage examples in the OpenVINO Notebooks:

- [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM)
- [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation)

## Limitations

Check the original [model card](https://huggingface.co/Qwen/Qwen3-8B) for limitations.

## Legal information

The original model is distributed under the [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE) license. More details can be found in [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
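## Quantization Sketch (illustrative)

As referenced under Quantization Parameters above, here is a minimal sketch of how an INT8_ASYM weight-compressed IR like this one might be produced. This is an assumption, not the authors' exact pipeline; it uses NNCF's public `compress_weights` API, and the file paths are hypothetical placeholders:

```
# Hedged sketch (assumption): weight-only INT8_ASYM compression with NNCF.
# An equivalent Optimum Intel one-liner would be roughly:
#   optimum-cli export openvino --model Qwen/Qwen3-8B --weight-format int8 qwen3-8b-int8-ov
import nncf
import openvino as ov

core = ov.Core()
# Read a full-precision OpenVINO IR previously exported from the original model
model = core.read_model("qwen3-8b-ov/openvino_model.xml")  # placeholder path

# Asymmetric INT8 weight-only compression, matching the mode stated in this card
compressed = nncf.compress_weights(model, mode=nncf.CompressWeightsMode.INT8_ASYM)
ov.save_model(compressed, "qwen3-8b-int8-ov/openvino_model.xml")  # placeholder path
```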
## Disclaimer

Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
koloni/blockassist-bc-deadly_graceful_stingray_1754965264
koloni
2025-08-12T02:47:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:47:33Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hobson123/blockassist-bc-mammalian_dense_gibbon_1754966510
hobson123
2025-08-12T02:47:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian dense gibbon", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:47:20Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
FluidInference/Qwen3-1.7B-fp16-ov
FluidInference
2025-08-12T02:46:29Z
0
0
null
[ "openvino", "qwen3", "base_model:Qwen/Qwen3-1.7B", "base_model:finetune:Qwen/Qwen3-1.7B", "license:apache-2.0", "region:us" ]
null
2025-08-11T22:45:33Z
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
base_model:
- Qwen/Qwen3-1.7B
---

# Qwen3-1.7B-fp16-ov

* Model creator: [Qwen](https://huggingface.co/Qwen)
* Original model: [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B)

## Description

This is the [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights compressed to FP16. A hedged sketch of this conversion appears at the end of this card.

## Compatibility

The provided OpenVINO™ IR model is compatible with:

* OpenVINO version 2025.1.0 and higher
* Optimum Intel 1.24.0 and higher

## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

```
pip install optimum[openvino]
```

2. Run model inference:

```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "FluidInference/qwen3-1.7b-fp16-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide.

## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)

1. Install packages required for using OpenVINO GenAI:

```
pip install openvino-genai huggingface_hub
```

2. Download the model from the Hugging Face Hub:

```
import huggingface_hub as hf_hub

model_id = "FluidInference/qwen3-1.7b-fp16-ov"
model_path = "qwen3-1.7b-fp16-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```

3. Run model inference:

```
import openvino_genai as ov_genai

device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
pipe.get_tokenizer().set_chat_template(pipe.get_tokenizer().chat_template)
print(pipe.generate("What is OpenVINO?", max_length=200))
```

More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).

You can find more detailed usage examples in the OpenVINO Notebooks:

- [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM)
- [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation)

## Limitations

Check the original [model card](https://huggingface.co/Qwen/Qwen3-1.7B) for limitations.

## Legal information

The original model is distributed under the [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE) license. More details can be found in [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).

## Disclaimer

Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
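## Conversion Sketch (illustrative)

As referenced in the Description above, here is a minimal sketch of how the FP16 OpenVINO IR conversion might be reproduced with Optimum Intel. This is an assumption, not the authors' exact command, and the output directory name is a hypothetical placeholder:

```
# Hedged sketch (assumption): converting the original checkpoint to an FP16
# OpenVINO IR with Optimum Intel. An equivalent CLI would be roughly:
#   optimum-cli export openvino --model Qwen/Qwen3-1.7B --weight-format fp16 qwen3-1.7b-fp16-ov
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "Qwen/Qwen3-1.7B"
save_dir = "qwen3-1.7b-fp16-ov"  # placeholder output directory

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly;
# load_in_8bit=False opts out of the default 8-bit weight compression that
# Optimum Intel otherwise applies to larger models, leaving FP16 weights
# (OpenVINO IR serialization compresses weights to FP16 by default).
model = OVModelForCausalLM.from_pretrained(model_id, export=True, load_in_8bit=False)
model.save_pretrained(save_dir)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.save_pretrained(save_dir)
```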