| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Shero448/LMB_style_lora
|
Shero448
| 2025-08-31T03:21:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
"base_model:adapter:dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
"region:us"
] |
text-to-image
| 2025-08-31T03:20:54Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/_2025-01-10-205619_00000_.png
text: >-
(masterpiece, best quality:1.2), amazing quality, very aesthetic, 32k,
absurdres, extremely beautiful, newest, scenery, extra details, (sharp
detailed:1.2),
parameters:
negative_prompt: >-
eyewear_on_head ,(lowres, bad quality, low quality, worst quality:1.2),
worst detail, jpeg artifacts, cropped, resolution mismatch, resized, bad
source,
base_model: dhead/wai-nsfw-illustrious-sdxl-v140-sdxl
instance_prompt: lmb
---
# LMB_style_lora
<Gallery />
## Trigger words
You should use `lmb` to trigger the image generation.
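A minimal sketch of applying this LoRA with diffusers (the pipeline class and the sample prompt are assumptions based on the card's tags, not instructions from the author):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumption: the base checkpoint is available in diffusers format on the Hub.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("Shero448/LMB_style_lora")

# Include the trigger word `lmb` in the prompt.
image = pipe("lmb, masterpiece, best quality, scenery").images[0]
image.save("lmb_sample.png")
```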
## Download model
[Download](/Shero448/LMB_style_lora/tree/main) the model weights from the Files & versions tab.
|
AlexSurya/birthday_wish_writer
|
AlexSurya
| 2025-08-31T03:20:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-31T02:49:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lamdo/casper
|
lamdo
| 2025-08-31T03:17:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-31T03:16:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arianaazarbal/standard_tpr_0.65-grpo_recontextualized_1_20250831_031427-policy-adapter
|
arianaazarbal
| 2025-08-31T03:15:48Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-31T03:14:44Z |
# Policy Model LoRA Adapter (GRPO/DPO)
Experiment: standard_tpr_0.65
Timestamp: grpo_recontextualized_1_20250831_031427
This model was trained as part of the deception-evasion-honesty experiments.
## Model Details
- **Type**: Policy Model LoRA Adapter (GRPO/DPO)
- **Experiment Name**: standard_tpr_0.65
- **Training Timestamp**: grpo_recontextualized_1_20250831_031427
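A minimal sketch of attaching this adapter with peft; the base checkpoint below is a placeholder assumption, since the card does not name the model the adapter was trained on:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Hypothetical base checkpoint -- the card does not document which base model was used.
base = AutoModelForCausalLM.from_pretrained("BASE_MODEL_ID", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(
    base,
    "arianaazarbal/standard_tpr_0.65-grpo_recontextualized_1_20250831_031427-policy-adapter",
)
```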
|
mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF
|
mradermacher
| 2025-08-31T03:13:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:smirki/UIGEN-FX-4B-Intermdiate-600",
"base_model:quantized:smirki/UIGEN-FX-4B-Intermdiate-600",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-31T02:53:29Z |
---
base_model: smirki/UIGEN-FX-4B-Intermdiate-600
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/smirki/UIGEN-FX-4B-Intermdiate-600
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#UIGEN-FX-4B-Intermdiate-600-GGUF).***
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
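As a concrete starting point, here is a minimal sketch using llama-cpp-python (the quant filename is one of the files listed below; pick whichever fits your hardware):
```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo via huggingface_hub.
llm = Llama.from_pretrained(
    repo_id="mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF",
    filename="UIGEN-FX-4B-Intermdiate-600.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Generate a simple HTML landing page."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```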
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermdiate-600-GGUF/resolve/main/UIGEN-FX-4B-Intermdiate-600.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1756607905
|
rvipitkirubbe
| 2025-08-31T03:03:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-31T03:03:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ewe666/small-rp-models
|
ewe666
| 2025-08-31T02:51:58Z | 0 | 8 | null |
[
"region:us"
] | null | 2024-08-14T23:45:57Z |
Good storytelling models that fit on an RTX 3060 (12 GB). Updated July 2025.
# Models
- **Current favorite**: [nbeerbower/Lyra4-Gutenberg-12B](https://huggingface.co/nbeerbower/Lyra4-Gutenberg-12B)
- [Sao10K/MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4)
- [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503)
- [MarinaraSpaghetti/NemoMix-Unleashed-12B](https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B)
- [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
- [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
- [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) (12B)
- [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b)
# Creators
- Whitelist: Sao, nbeerbower
- Blacklist: DavidAU, SicariusSicariiStuff, Allura
- Greylist: The Drummer
# Remarks
- Roleplay and storywriting are distinct tasks! Some models excel at one and fail at the other.
- Don't waste time on sampler settings; use the recommended ones and optimize the prompt instead
- Don't "overparameterize" by writing too long a prompt
- Don't underestimate the original instruct models
# Links
- [llama.cpp](https://github.com/ggerganov/llama.cpp) and [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) - **preferred LLM software**
- [/r/localllama](https://www.reddit.com/r/LocalLLaMA/)
- /lmg/
- [LMSys Chatbot Arena Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
- [Uncensored General Intelligence Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard)
- [/r/SillyTavernAI](https://www.reddit.com/r/SillyTavernAI/)
- NothingiisReal discord
- NeverSleep discord
- SillyTavern discord
- BeaverAI discord
|
bah63843/blockassist-bc-plump_fast_antelope_1756608614
|
bah63843
| 2025-08-31T02:51:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-31T02:51:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756608389
|
sekirr
| 2025-08-31T02:47:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-31T02:47:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/UIGEN-FX-4B-Intermediate-GGUF
|
mradermacher
| 2025-08-31T02:45:31Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"sft",
"en",
"base_model:smirki/UIGEN-FX-4B-Intermediate",
"base_model:quantized:smirki/UIGEN-FX-4B-Intermediate",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-31T02:22:39Z |
---
base_model: smirki/UIGEN-FX-4B-Intermediate
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/smirki/UIGEN-FX-4B-Intermediate
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#UIGEN-FX-4B-Intermediate-GGUF).***
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-FX-4B-Intermediate-GGUF/resolve/main/UIGEN-FX-4B-Intermediate.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
allstax/editorial-qwen-v2-adapter
|
allstax
| 2025-08-31T02:43:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-31T02:42:52Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** allstax
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
joker009/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-carnivorous_stalking_macaque
|
joker009
| 2025-08-31T02:39:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am carnivorous_stalking_macaque",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-31T02:02:52Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am carnivorous_stalking_macaque
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NoemaResearch/Daedalus-1-8B
|
NoemaResearch
| 2025-08-31T02:38:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:ByteDance-Seed/Seed-Coder-8B-Reasoning",
"base_model:finetune:ByteDance-Seed/Seed-Coder-8B-Reasoning",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T06:30:07Z |
---
base_model:
- ByteDance-Seed/Seed-Coder-8B-Reasoning
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: mit
language:
- en
---

# Daedalus-1-8B
[](https://huggingface.co/NoemaResearch/Daedalus-1-8B)
[](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning)
[](LICENSE)
Daedalus-1-8B is an 8 billion parameter language model for code generation and reasoning, developed by **Noema Research**.
It is a finetuned derivative of [Seed-Coder-8B-Reasoning](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning),
with enhancements for instruction following, structured code generation, and improved safety alignment.
---
## Model Overview
- **Base model:** `ByteDance-Seed/Seed-Coder-8B-Reasoning`
- **Architecture:** Decoder-only transformer
- **Parameters:** ~8.25B
- **Context length:** Long-context support (up to ~64k tokens)
- **Domain:** Programming and natural language reasoning
- **Primary applications:**
- Code generation and completion
- Debugging and error explanation
- Unit test generation
- Structured outputs (e.g., JSON, function calls)
- **License:** MIT
---
## Key Improvements
Relative to the base model, Daedalus introduces targeted post-training improvements:
- **Instruction tuning** for developer-oriented tasks
- **Structured output fidelity**, supporting JSON and schema-constrained responses
- **Enhanced reasoning** for debugging and multi-step problem solving
- **Reduced error rate** in code execution benchmarks
- **Safety-oriented adjustments**, including avoidance of unsafe coding patterns
---
## Usage
The model is released in Hugging Face Transformers format. Example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "NoemaResearch/Daedalus-1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
messages = [
    {"role": "system", "content": "You are Daedalus, a coding assistant."},
    {"role": "user", "content": "Write a memory-efficient quicksort in Python with unit tests."}
]
# return_dict=True yields input_ids plus attention_mask, so **inputs unpacks cleanly into generate().
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, temperature=0.2, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
**Recommended settings:**
* `temperature=0.2–0.6` for deterministic code generation
* `top_p=0.9–0.95` for balanced creativity and correctness
---
## Evaluation
Daedalus inherits strong performance on competitive programming and reasoning tasks from Seed-Coder-8B-Reasoning.
Internal evaluations indicate:
* Higher **unit test pass rates**
* Improved **structured output validity**
* Reduced incidence of **hallucinated APIs**
A comprehensive benchmark report will be released in future updates.
For upstream benchmarks, please refer to the [Seed-Coder-8B-Reasoning model card](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning).
---
## Limitations
Daedalus remains subject to common limitations of large language models:
* **Hallucinated libraries or functions:** the model may generate non-existent APIs
* **Insecure coding patterns:** suggestions should be reviewed for security and safety
* **Reasoning errors:** multi-step solutions may fail on complex edge cases
* **Dependence on prompt quality:** outputs are sensitive to phrasing and context
All generated code should be verified, linted, and tested before use in production.
---
## Responsible Use
* Do not provide secrets or credentials in prompts.
* Use outputs only in controlled, sandboxed, or reviewed environments.
* The model should not be employed for generating malicious software or unsafe code.
* We encourage the use of additional guardrails (static analyzers, test harnesses, execution sandboxes) in deployment contexts.
---
## Model Variants
* **Full-precision (safetensors)** — for research and high-fidelity inference
* **bf16 / fp16** — for efficient inference on modern accelerators
* **Quantized variants (int8, int4)** — for resource-constrained environments
---
## Citation
If you use this model, please cite both Daedalus and the underlying Seed-Coder base model:
```bibtex
@misc{noema2025daedalus,
title={Daedalus-1-8B},
author={Noema Research},
year={2025},
howpublished={\url{https://huggingface.co/NoemaResearch/Daedalus-1-8B}}
}
```
---
## Acknowledgements
Daedalus builds upon the [Seed-Coder](https://huggingface.co/ByteDance-Seed) family of models developed by ByteDance-Seed.
We thank the Seed team for releasing their models under permissive terms, enabling further research and refinement.
|
AnonymousCS/populism_classifier_338
|
AnonymousCS
| 2025-08-31T02:37:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_english_bert_large_cased",
"base_model:finetune:AnonymousCS/populism_english_bert_large_cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-31T02:34:30Z |
---
library_name: transformers
license: apache-2.0
base_model: AnonymousCS/populism_english_bert_large_cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_338
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_338
This model is a fine-tuned version of [AnonymousCS/populism_english_bert_large_cased](https://huggingface.co/AnonymousCS/populism_english_bert_large_cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3150
- Accuracy: 0.9412
- 1-f1: 0.4082
- 1-recall: 0.3226
- 1-precision: 0.5556
- Balanced Acc: 0.6526
## Model description
More information needed
## Intended uses & limitations
More information needed
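As an illustrative starting point, a minimal inference sketch with the transformers pipeline API (the example sentence is an assumption, and the label vocabulary is not documented in this card):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/populism_classifier_338")
# Returns e.g. [{"label": ..., "score": ...}]; the label mapping is undocumented here.
print(clf("The corrupt elites have betrayed the will of the ordinary people."))
```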
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.242 | 1.0 | 31 | 0.4530 | 0.9209 | 0.4935 | 0.6129 | 0.4130 | 0.7772 |
| 0.0589 | 2.0 | 62 | 0.5531 | 0.9290 | 0.5205 | 0.6129 | 0.4524 | 0.7816 |
| 0.0347 | 3.0 | 93 | 0.7793 | 0.9229 | 0.4722 | 0.5484 | 0.4146 | 0.7482 |
| 0.0249 | 4.0 | 124 | 1.3150 | 0.9412 | 0.4082 | 0.3226 | 0.5556 | 0.6526 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1756604397
|
sampingkaca72
| 2025-08-31T02:07:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-31T02:07:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-snappy_tenacious_eagle_1756605987
|
AnerYubo
| 2025-08-31T02:06:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snappy tenacious eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-31T02:06:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snappy tenacious eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/populism_classifier_329
|
AnonymousCS
| 2025-08-31T02:05:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_english_bert_large_cased",
"base_model:finetune:AnonymousCS/populism_english_bert_large_cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-31T02:02:50Z |
---
library_name: transformers
license: apache-2.0
base_model: AnonymousCS/populism_english_bert_large_cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_329
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_329
This model is a fine-tuned version of [AnonymousCS/populism_english_bert_large_cased](https://huggingface.co/AnonymousCS/populism_english_bert_large_cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8195
- Accuracy: 0.9486
- 1-f1: 0.5614
- 1-recall: 0.5714
- 1-precision: 0.5517
- Balanced Acc: 0.7715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.28 | 1.0 | 31 | 0.4135 | 0.9403 | 0.5085 | 0.5357 | 0.4839 | 0.7504 |
| 0.4878 | 2.0 | 62 | 0.5332 | 0.9444 | 0.5714 | 0.6429 | 0.5143 | 0.8029 |
| 0.0097 | 3.0 | 93 | 0.7666 | 0.9424 | 0.5484 | 0.6071 | 0.5 | 0.7850 |
| 0.015 | 4.0 | 124 | 0.8195 | 0.9486 | 0.5614 | 0.5714 | 0.5517 | 0.7715 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ruggsea/dante-zero-20250830-Pleias-350m-Preview
|
ruggsea
| 2025-08-31T02:02:34Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-30T06:27:05Z |
# Dante-Zero Fine-tuned Model
This model was fine-tuned using Group Relative Policy Optimization (GRPO), a reinforcement learning method, to generate Dante-style poetry in endecasillabi (11-syllable lines).
## Model Details
- **Base Model:** PleIAs/Pleias-350m-Preview
- **Training Method:** GRPO (Group Relative Policy Optimization)
- **Training Data:** 1,000 chunks from Dante's Divine Comedy
- **Epochs:** 10
- **Trained By:** ruggsea
- **Date:** 2025-08-31
- **Run Name:** dante-zero-20250830-Pleias-350m-Preview
## Model Description
This model is specialized in generating Italian poetry in the style of Dante Alighieri's Divine Comedy. It has been trained to:
1. Generate proper endecasillabi (11-syllable lines)
2. Follow the structure of Dante's poetry
3. Avoid repetition
4. Create original content (not plagiarize the Divine Comedy)
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("ruggsea/dante-zero-20250830-Pleias-350m-Preview")
tokenizer = AutoTokenizer.from_pretrained("ruggsea/dante-zero-20250830-Pleias-350m-Preview", padding_side="left")
# Ensure proper tokenizer settings
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
# Generate poetry
prompt = "Nel mezzo del cammin di nostra vita"
inputs = tokenizer(prompt, return_tensors="pt")  # tokenizer already pads on the left
outputs = model.generate(
    **inputs,  # passes attention_mask along with input_ids
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.2
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
## Reward Functions
The model was trained using several reward functions (a sketch of the first appears after the list):
1. **Endecasillabo Checker:** Rewards proper 11-syllable lines
2. **Plagiarism Checker:** Penalizes copying from the Divine Comedy
3. **Verse Structure Checker:** Encourages verse-like structure
4. **Repetition Penalty:** Discourages repetitive text
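As an illustration only, a rough sketch of what the endecasillabo checker might look like; the vowel-group syllable heuristic and the function names are assumptions, not the actual training code:
```python
import re

ITALIAN_VOWELS = "aeiouàèéìíòóùú"

def count_syllables_it(line: str) -> int:
    # Crude approximation: each maximal vowel group counts as one syllable.
    # Real endecasillabo scansion also handles synalepha, dieresis, and stress position.
    return len(re.findall(f"[{ITALIAN_VOWELS}]+", line.lower()))

def endecasillabo_reward(completion: str) -> float:
    # Reward the fraction of non-empty lines that scan as 11 syllables.
    lines = [l.strip() for l in completion.splitlines() if l.strip()]
    if not lines:
        return 0.0
    return sum(count_syllables_it(l) == 11 for l in lines) / len(lines)
```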
|
Completo-video-do-Edith-Lupya/Ver.Viral.video.Coldplay.Edith.viral.en.twitter.y.telegram
|
Completo-video-do-Edith-Lupya
| 2025-08-31T01:35:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-31T01:34:55Z |
|
bamposam/blockassist-bc-lumbering_tawny_gecko_1756604014
|
bamposam
| 2025-08-31T01:34:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering tawny gecko",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-31T01:34:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering tawny gecko
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/zeta-3b-GGUF
|
mradermacher
| 2025-08-31T01:32:50Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"base_model:Woutermans/zeta-3b",
"base_model:quantized:Woutermans/zeta-3b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-31T01:02:06Z |
---
base_model: Woutermans/zeta-3b
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Woutermans/zeta-3b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#zeta-3b-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/zeta-3b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-3b-GGUF/resolve/main/zeta-3b.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF
|
mradermacher
| 2025-08-31T01:26:49Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chemistry",
"code",
"math",
"grpo",
"conversational",
"moe",
"en",
"base_model:Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2",
"base_model:quantized:Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2025-08-31T00:26:10Z |
---
base_model: Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2
language:
- en
library_name: transformers
license: llama3.2
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- chemistry
- code
- math
- grpo
- conversational
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Superthoughts-lite-v2-MOE-Llama3.2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.Q2_K.gguf) | Q2_K | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.Q3_K_S.gguf) | Q3_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.IQ4_XS.gguf) | IQ4_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.Q4_K_S.gguf) | Q4_K_S | 2.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.Q5_K_S.gguf) | Q5_K_S | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.Q8_0.gguf) | Q8_0 | 4.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Superthoughts-lite-v2-MOE-Llama3.2-GGUF/resolve/main/Superthoughts-lite-v2-MOE-Llama3.2.f16.gguf) | f16 | 7.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen3-32B-AWorld-GGUF
|
mradermacher
| 2025-08-31T01:25:36Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:inclusionAI/Qwen3-32B-AWorld",
"base_model:quantized:inclusionAI/Qwen3-32B-AWorld",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-30T20:17:39Z |
---
base_model: inclusionAI/Qwen3-32B-AWorld
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/inclusionAI/Qwen3-32B-AWorld
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-32B-AWorld-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-32B-AWorld-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.Q3_K_M.gguf) | Q3_K_M | 16.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.Q5_K_M.gguf) | Q5_K_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-AWorld-GGUF/resolve/main/Qwen3-32B-AWorld.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
qing223101/blockassist-bc-coiled_stinging_hummingbird_1756601810
|
qing223101
| 2025-08-31T01:25:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"coiled stinging hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-31T01:24:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- coiled stinging hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Casual-Autopsy/Nox-Personal-Quant-Storage_GGUF
|
Casual-Autopsy
| 2025-08-31T01:18:55Z | 213 | 0 | null |
[
"gguf",
"imatrix",
"custom_gguf",
"personalized_gguf",
"rp",
"roleplay",
"text-generation",
"en",
"base_model:Casual-Autopsy/CREC-n-WREC-Mate-24B-v2",
"base_model:quantized:Casual-Autopsy/CREC-n-WREC-Mate-24B-v2",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-21T18:14:33Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- imatrix
- custom_gguf
- personalized_gguf
- rp
- roleplay
base_model:
- TheDrummer/Cydonia-24B-v4.1
- aixonlab/Eurydice-24b-v2
- ReadyArt/Broken-Tutu-24B
- Casual-Autopsy/CREC-n-WREC-Mate-24B-v2
- SlerpE/CardProjector-24B-v3
---
Storage of some personally made GGUF quants.<br/><br/>
Imatrices are made with 1 million tokens of coding, math, English, and Gutenberg book data.<br/>
Quants are custom `IQ4_XS` quants with `Q5_K` ffn_down and attn_output tensors, `Q5_K`/`IQ4_NL`/`IQ4_XS` mixed ffn_up tensors, `Q5_K`/`IQ4_XS` mixed ffn_gate tensors, and `Q8_0` token embed and output tensors.<br/>
The quants are designed specifically for my PC specs (Intel i9 + RTX 4080 SUPER) to retain as much quality as possible while still allowing prompt processing speeds of at least 10% of max ctx per second (`16k` ctx = `1.6k T/s`) and generation speeds above `15 T/s`.
|
aquiffoo/aquif-3.5-8B-Think
|
aquiffoo
| 2025-08-31T01:05:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"language",
"aquif",
"text-generation-inference",
"math",
"coding",
"small",
"aquif-3.5",
"conversational",
"en",
"de",
"it",
"pt",
"fr",
"hi",
"es",
"th",
"zh",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-30T22:45:46Z |
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- aquif
- text-generation-inference
- math
- coding
- small
- aquif-3.5
language:
- en
- de
- it
- pt
- fr
- hi
- es
- th
- zh
- ja
---
# aquif-3.5
The aquif-3.5 series is the successor to aquif-3, featuring a simplified naming scheme, expanded Mixture of Experts (MoE) options, and across-the-board performance improvements. This release streamlines model selection while delivering enhanced capabilities across reasoning, multilingual support, and general intelligence tasks.
## Model Repository Links
| Model | HuggingFace Repository |
|-------|----------------------|
| aquif-3.5-A0.6B-Preview | [aquiffoo/aquif-3.5-A0.6B-Preview](https://huggingface.co/aquiffoo/aquif-3.5-A0.6B-Preview) |
| aquif-3.5-3B | [aquiffoo/aquif-3.5-3B](https://huggingface.co/aquiffoo/aquif-3.5-3B) |
| aquif-3.5-7B | [aquiffoo/aquif-3.5-7B](https://huggingface.co/aquiffoo/aquif-3.5-7B) |
| aquif-3.5-8B-Think | [aquiffoo/aquif-3.5-8B-Think](https://huggingface.co/aquiffoo/aquif-3.5-8B-Think) |
| aquif-3.5-A4B-Think | [aquiffoo/aquif-3.5-A4B-Think](https://huggingface.co/aquiffoo/aquif-3.5-A4B-Think) |
## Model Overview
| Model | Size (B) | Active Params (B) | Reasoning | MoE | Multilingual | MMLU | Context Window |
|-------|----------|-------------------|-----------|-----|--------------|------|----------------|
| aquif-3.5-A0.6B | 2.61 | 0.6 | ❌ | ✅ | ✅ | 60.5% | 4k |
| aquif-3.5-3B | 2.67 | 2.67 | ❌ | ❌ | ✅ | 70.2% | 32k |
| aquif-3.5-7B | 7.3 | 7.3 | ❌ | ❌ | ✅ | 78.5% | 16k |
| aquif-3.5-8B-Think | 8.2 | 8.2 | ✅ | ❌ | ✅ | 81.1% | 40k |
| aquif-3.5-A4B-Think | 12 | 4 | ✅ | ✅ | ✅ | 86.9% | 128k |
## Model Details
### aquif-3.5-A0.6B (Experimental MoE)
An experimental small-scale Mixture of Experts model designed for multilingual applications with minimal computational overhead. Despite its compact active parameter count, it demonstrates competitive performance against larger dense models.
**Performance Comparison:**
| Metric | aquif-3.5 (2.6B A0.6B) | Qwen3 (0.8B) | LFM2 (0.7B) | aquif-3 (0.4B) |
|--------|------------------------|--------------|-------------|----------------|
| MMLU | 60.5 | 44.9 | 49.9 | 55.6 |
| GPQA | 30.2 | 22.1 | 28.5 | 28.5 |
| GSM8K | 50.7 | 36.5 | 46.4 | 52.1 |
| HumanEval | 45.2 | 36.0 | 40.0 | 37.4 |
| **Average** | **46.7** | **34.9** | **41.2** | **43.4** |
### aquif-3.5-3B (State-of-the-Art Dense)
The new standard for small dense models, offering optimal performance-per-parameter efficiency for general-purpose applications.
**Performance Comparison:**
| Metric | aquif-3.5 (2.7B) | EXAONE 3.5 (2.4B) | Qwen3 (4B) | Gemma 3 (4B) | Phi-4-mini (3.8B) | Apriel-5B-Instruct (4.8B) | aquif-3 (3.2B) |
|--------|------------------|-------------------|------------|--------------|-------------------|---------------------------|----------------|
| MMLU (General Knowledge) | 70.2 | 60.4 | 70.4 | 59.6 | 67.3 | 64.6 | 67.5 |
| GPQA Diamond (Science) | 35.8 | 28.4 | 39.3 | 30.9 | 25.2 | 28.4 | 36.1 |
| LiveCodeBench (Coding) | 23.1 | 12.5 | 21.3 | 11.2 | 10.4 | 11.6 | 15.4 |
| IFEval (Instruction Following) | 78.9 | 73.6 | 71.2 | 80.2 | 68.6 | 80.8 | 78.9 |
| AIME 2025 (Competition Math) | 13.4 | 4.5 | 9.8 | 12.7 | 5.3 | 4.3 | 9.6 |
| **Average** | **44.3** | **35.9** | **42.4** | **38.9** | **35.4** | **37.9** | **41.5** |
### aquif-3.5-7B (Multilingual Long Context)
A Qwen-based architecture optimized for multilingual applications with extended context capabilities, delivering state-of-the-art performance in its size class.
**Performance Comparison:**
| Metric | aquif-3.5 (7.3B) | EXAONE 3.5 (7.8B) | Qwen3 (8.2B) | Gemma 3 (12B) | Llama 3.1 (8B) | Kanana 1.5 (8B) | aquif-3 (3.2B) |
|--------|------------------|-------------------|-------------|---------------|----------------|-----------------|----------------|
| MMLU (General Knowledge) | 78.5 | 72.2 | 82.9 | 74.5 | 69.2 | 68.8 | 67.5 |
| GPQA Diamond (Science) | 42.3 | 39.4 | 39.3 | 40.9 | 32.8 | 37.5 | 36.1 |
| LiveCodeBench (Coding) | 21.3 | 18.0 | 23.9 | 13.7 | 10.8 | 16.5 | 15.4 |
| IFEval (Instruction Following) | 85.6 | 82.6 | 85.4 | 80.2 | 75.0 | 80.1 | 78.9 |
| AIME 2025 (Competition Math) | 23.4 | 18.3 | 20.9 | 18.8 | 2.7 | 13.4 | 9.6 |
| **Average** | **50.2** | **46.1** | **50.4** | **45.6** | **38.1** | **43.3** | **41.5** |
### aquif-3.5-8B-Think & aquif-3.5-A4B-Think (Reasoning Models)
Advanced reasoning-capable models designed for complex problem-solving tasks. The A4B variant leverages MoE architecture for enhanced efficiency while maintaining superior reasoning performance.
**Performance Comparison:**
| Metric | aquif-3.5 (12B A4B) | aquif-3.5 (8B) | Qwen3 Thinking 2507 (31B A3B) | gpt-oss-20b (21B A4B) | Nemotron Nano v2 (9B) | Solar Pro 2 |
|--------|---------------------|-----------------|-------------------------------|----------------------|----------------------|-------------|
| MMLU-Pro | 78.5 | 78.1 | 80.5 | 73.6 | 74.2 | 80.5 |
| GPQA Diamond | 70.8 | 66.8 | 70.7 | 61.7 | 64.0 | 68.7 |
| AIME 2025 | 84.4 | 81.4 | 56.3 | 61.7 | 69.7 | 61.3 |
| LiveCodeBench | 66.1 | 61.5 | 70.7 | 72.1 | 71.1 | 61.6 |
| Humanity's Last Exam | 8.9 | 8.2 | 9.8 | 8.5 | 6.5 | 7.0 |
| TAU-Bench v2 (avg) | 43.7 | 36.8 | 35.7 | 43.2 | 34.9 | 38.7 |
| **Average** | **58.7** | **55.5** | **54.0** | **53.5** | **53.4** | **53.0** |
## Key Improvements Over aquif-3
- **Simplified Naming**: Clear size-based nomenclature for easier model selection
- **Enhanced MoE Support**: Multiple MoE configurations across different model sizes
- **Reasoning Capabilities**: Dedicated thinking models for complex problem-solving
- **Extended Context**: Up to 128k context window for long-form applications
- **Multilingual by Default**: Native multilingual support across all variants
- **Performance Gains**: 5-15% improvement across benchmarks compared to aquif-3
## Usage Recommendations
- **aquif-3.5-A0.6B**: Experimental applications, resource-constrained environments
- **aquif-3.5-3B**: General-purpose applications, balanced performance/efficiency
- **aquif-3.5-7B**: Multilingual applications, long-context tasks
- **aquif-3.5-8B-Think**: Complex reasoning, scientific analysis
- **aquif-3.5-A4B-Think**: Advanced reasoning with efficiency optimization
## Technical Specifications
All models support:
- BF16 and FP16 precision (see the loading sketch after this list)
- Standard transformer architecture optimizations
- Efficient attention mechanisms
- Multi-head attention with optimized KV caching
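As a rough illustration of the BF16 path mentioned above, here is a minimal loading sketch with 🤗 Transformers; the prompt and generation settings are assumptions, not an official quick start:
```python
# Minimal sketch: loading aquif-3.5-8B-Think in bf16 with transformers
# (assumes `accelerate` is installed for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aquiffoo/aquif-3.5-8B-Think"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=512)[0], skip_special_tokens=True))
```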
## Acknowledgements
- **Qwen Team**: Base architecture for 7B, 8B, and 12B-A4B models
- **Meta Llama Team**: Base architecture for 3B and 2.6B-A0.6B models
- **Hugging Face**: Model hosting infrastructure and training libraries
## License
This project is released under the Apache 2.0 License. See LICENSE file for details.
---
*Made in 🇧🇷*
© 2025 aquif AI. All rights reserved.
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756600494
|
Loder-S
| 2025-08-31T00:58:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-31T00:58:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
diegogari23/noctatherion-llama3-8b-qLoRA
|
diegogari23
| 2025-08-31T00:51:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-31T00:50:25Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** diegogari23
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
giovannidemuri/llama8b-er-v501-seed2-hx
|
giovannidemuri
| 2025-08-31T00:49:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T21:43:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sagarhonnungar/sd-class-butterflies-32
|
sagarhonnungar
| 2025-08-31T00:47:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2025-08-31T00:46:00Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('sagarhonnungar/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF
|
mradermacher
| 2025-08-31T00:38:31Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:xxrjun/gpt-oss-120b-multilingual-reasoner-fp32",
"base_model:finetune:xxrjun/gpt-oss-120b-multilingual-reasoner-fp32",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T13:57:27Z |
---
base_model: xxrjun/gpt-oss-120b-multilingual-reasoner-fp32
language:
- en
library_name: transformers
model_name: gpt-oss-120b-multilingual-reasoner
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/xxrjun/gpt-oss-120b-multilingual-reasoner-fp32
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gpt-oss-120b-multilingual-reasoner-fp32-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
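Since every quant in this repo is split into parts, here is a minimal sketch (paths assumed) of joining the parts back into a single GGUF before loading:
```python
# Minimal sketch: concatenating split GGUF parts into one file.
# The part names below follow this repo's naming scheme for the Q4_K_S quant.
import shutil

parts = [
    "gpt-oss-120b-multilingual-reasoner-fp32.Q4_K_S.gguf.part1of2",
    "gpt-oss-120b-multilingual-reasoner-fp32.Q4_K_S.gguf.part2of2",
]
with open("gpt-oss-120b-multilingual-reasoner-fp32.Q4_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams chunks; avoids loading 80+ GB into RAM
```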
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q3_K_S.gguf.part2of2) | Q3_K_S | 66.2 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q2_K.gguf.part2of2) | Q2_K | 66.3 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.IQ4_XS.gguf.part2of2) | IQ4_XS | 67.1 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q3_K_M.gguf.part2of2) | Q3_K_M | 71.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q3_K_L.gguf.part2of2) | Q3_K_L | 73.5 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q4_K_S.gguf.part2of2) | Q4_K_S | 81.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q4_K_M.gguf.part2of2) | Q4_K_M | 88.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q5_K_S.gguf.part2of2) | Q5_K_S | 88.1 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q5_K_M.gguf.part2of2) | Q5_K_M | 94.0 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q6_K.gguf.part3of3) | Q6_K | 124.3 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/gpt-oss-120b-multilingual-reasoner-fp32-GGUF/resolve/main/gpt-oss-120b-multilingual-reasoner-fp32.Q8_0.gguf.part3of3) | Q8_0 | 124.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Discord-Micae-8B-Preview-GGUF
|
mradermacher
| 2025-08-31T00:33:50Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"causal-lm",
"text-generation",
"instruct",
"chat",
"fine-tuned",
"merged-lora",
"llama-3",
"hermes",
"discord-dataset",
"conversational-ai",
"chatml",
"pytorch",
"open-weights",
"8b-parameters",
"en",
"dataset:mookiezi/Discord-Dialogues",
"base_model:mookiezi/Discord-Micae-8B-Preview",
"base_model:quantized:mookiezi/Discord-Micae-8B-Preview",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-30T17:27:13Z |
---
base_model: mookiezi/Discord-Micae-8B-Preview
datasets:
- mookiezi/Discord-Dialogues
language:
- en
library_name: transformers
license: llama3
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- transformers
- causal-lm
- text-generation
- instruct
- chat
- fine-tuned
- merged-lora
- llama-3
- hermes
- discord-dataset
- conversational-ai
- chatml
- pytorch
- open-weights
- 8b-parameters
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/mookiezi/Discord-Micae-8B-Preview
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Discord-Micae-8B-Preview-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-8B-Preview-GGUF/resolve/main/Discord-Micae-8B-Preview.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
vendi11/blockassist-bc-placid_placid_llama_1756599404
|
vendi11
| 2025-08-31T00:17:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-31T00:17:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mrtoots/CaptainErisNebula-12B-Chimera-v1.1-mlx-8Bit
|
mrtoots
| 2025-08-31T00:17:17Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"en",
"base_model:Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1",
"base_model:quantized:Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1",
"license:other",
"8-bit",
"region:us"
] | null | 2025-08-31T00:09:03Z |
---
license: other
language:
- en
tags:
- mlx
base_model: Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1
---
# mrtoots/CaptainErisNebula-12B-Chimera-v1.1-mlx-8Bit
The Model [mrtoots/CaptainErisNebula-12B-Chimera-v1.1-mlx-8Bit](https://huggingface.co/mrtoots/CaptainErisNebula-12B-Chimera-v1.1-mlx-8Bit) was converted to MLX format from [Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1](https://huggingface.co/Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1) using mlx-lm version **0.26.4**.
# Toots' Note:
Prompt template and configuration [details here](https://huggingface.co/Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1)
Please support [Nitral's work](https://huggingface.co/Nitral-AI) if you like this model!
🦛 <span style="color:#800080">If you want a free consulting session, </span>[fill out this form](https://forms.gle/xM9gw1urhypC4bWS6) <span style="color:#800080">to get in touch!</span> 🤗
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mrtoots/CaptainErisNebula-12B-Chimera-v1.1-mlx-8Bit")
prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756597125
|
NahedDom
| 2025-08-31T00:13:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-31T00:13:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756596675
|
Loder-S
| 2025-08-30T23:57:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T23:57:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756598210
|
klmdr22
| 2025-08-30T23:57:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T23:57:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
spacepxl/Wan2.1_VACE_14B_fp8_scaled
|
spacepxl
| 2025-08-30T23:54:16Z | 0 | 0 | null |
[
"base_model:Wan-AI/Wan2.1-VACE-14B",
"base_model:finetune:Wan-AI/Wan2.1-VACE-14B",
"license:apache-2.0",
"region:us"
] | null | 2025-08-30T23:42:33Z |
---
license: apache-2.0
base_model:
- Wan-AI/Wan2.1-VACE-14B
---
Full Wan2.1 14B VACE model, converted from fp16 to fp8_scaled, [using this script](https://gist.github.com/spacepxl/30fe4595e89ce912a76ef462c566b2d1).
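For readers curious what "fp8_scaled" means in practice, below is a minimal sketch of the general idea (per-tensor scaled casting to fp8). It is an illustration only, not the linked script, and assumes a PyTorch build with float8 support:
```python
# Illustrative sketch of per-tensor "scaled" fp8 conversion -- not the linked script.
import torch

def to_fp8_scaled(w: torch.Tensor):
    # Scale so the largest magnitude maps onto the fp8 e4m3 max representable value.
    scale = w.abs().max().clamp(min=1e-12) / torch.finfo(torch.float8_e4m3fn).max
    q = (w / scale).to(torch.float8_e4m3fn)  # quantized weights
    return q, scale  # store the scale alongside q; dequant is q.to(w.dtype) * scale
```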
|
arianaazarbal/standard_tpr_0.65-grpo_recontextualized_debug_1_20250830_234646-policy-adapter
|
arianaazarbal
| 2025-08-30T23:48:49Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-30T23:47:47Z |
# Policy Model LoRA Adapter (GRPO/DPO)
Experiment: standard_tpr_0.65
Timestamp: grpo_recontextualized_debug_1_20250830_234646
This model was trained as part of the deception-evasion-honesty experiments.
## Model Details
- **Type**: Policy Model LoRA Adapter (GRPO/DPO)
- **Experiment Name**: standard_tpr_0.65
- **Training Timestamp**: grpo_recontextualized_debug_1_20250830_234646
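A minimal sketch of attaching this adapter with PEFT follows; the base model id below is an assumption, since the card does not state which base was fine-tuned:
```python
# Minimal sketch: loading the LoRA adapter on top of a base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")  # assumed base model
model = PeftModel.from_pretrained(
    base,
    "arianaazarbal/standard_tpr_0.65-grpo_recontextualized_debug_1_20250830_234646-policy-adapter",
)
```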
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756595793
|
kojeklollipop
| 2025-08-30T23:42:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T23:42:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756596954
|
ggozzy
| 2025-08-30T23:37:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T23:37:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756596525
|
akirafudo
| 2025-08-30T23:29:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T23:29:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756596356
|
akirafudo
| 2025-08-30T23:26:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T23:26:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moutiope/blockassist-bc-powerful_thick_termite_1756595790
|
moutiope
| 2025-08-30T23:16:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"powerful thick termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T23:16:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- powerful thick termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756595671
|
liukevin666
| 2025-08-30T23:15:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T23:15:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
deadman44/Wan2.2_T2i_T2v_LoRA
|
deadman44
| 2025-08-30T23:14:19Z | 0 | 11 | null |
[
"text-to-image",
"t2i",
"wan video",
"safetensors",
"text-to-video",
"en",
"license:apache-2.0",
"region:us"
] |
text-to-video
| 2025-07-27T00:45:26Z |
---
license: apache-2.0
pipeline_tag: text-to-video
language:
- en
tags:
- text-to-image
- t2i
- wan video
- safetensors
---
<style>
.title{
font-size: 2.5em;
letter-spacing: 0.01em;
padding: 0.5em 0;
}
.thumbwidth{
max-width: 180px;
}
.font_red{
color:red;
}
.font_blue{
color:blue;
}
.font_grey{
color: #aaaaaa;
}
</style>
# models
- [Wan2.2_myob_v01](#myob) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-31<br />
- [Wan2.2_myjd_v01](#myjd) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-26<br />
- [Wan2.2_myjy_v01](#myjy) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-21<br />
- [Wan2.2_myjk_v01](#myjk) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-18<br />
- [Wan2.2_myjc_v01](#myjc) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-14<br />
- [Wan2.2_myjs_v01](#myjs) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-11<br />
- Add [Workflow page](https://huggingface.co/deadman44/Wan2.2_Workflow_for_myxx_series_LoRA): 2025-08-04<br />
---
<br>
- Workflow
### - [Sample Workflow for myxx series LoRA](https://huggingface.co/deadman44/Wan2.2_Workflow_for_myxx_series_LoRA)<br>
<br>
---
<a id="myob"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myob_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280 or 1024 x 1536 (T2i), 512 x 768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese OB face</span><br/>
<br/>
<br/>
# Download
[Download: myob_High_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myob_High_v01.safetensors?download=true) <br />
[Download: myob_Low_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myob_Low_v01.safetensors?download=true) <br />
<br />
# Trigger
```bash
myob, japanese/european, photorealistic
and 23-30yo
```
<br />
# Sample prompt (v01)
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250831073513_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
myob, japanese,
A Japanese woman, 30 years old, standing in the kitchen and holding a pan.
She wears a white sweater and a pink apron.
She has a brown bob haircut.
She tilts her head slightly and smiles with closed lips.
A mole is visible on her neck.
She looks at the viewer calmly.
Motion: subtle breathing, head tilt
Style: photorealistic
Camera: medium close-up
Mood: serene
```
<br/>
---
<a id="myjd"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myjd_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280 or 1024 x 1536 (T2i), 512 x 768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese JD face</span><br/>
<br/>
<br/>
# Download
[Download: myjd_High_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjd_High_v01.safetensors?download=true) <br />
[Download: myjd_Low_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjd_Low_v01.safetensors?download=true) <br />
<br />
# Trigger
```bash
myjd, japanese/european, photorealistic
and 19-22yo
```
<br />
# Sample prompt (v01)
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250826065050_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
22yo, myjd, japanese,
A woman dressed in a maid costume carries coffee on a tray in a café. She has black hair tied in a ponytail and wears a maid headdress.
```
<br/>
---
<a id="myjk"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myjk_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280, 1024 x 1536.(T2i), 512x768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese JK face</span><br/>
<br/>
<br/>
# Download
[Download: myjk_High_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjk_High_v01.safetensors?download=true) <br />
[Download: myjk_Low_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjk_Low_v01.safetensors?download=true) <br />
<br />
# Trigger
```bash
myjk, japanese/european, photorealistic
and 16-18yo
```
<br />
# Sample prompt (v01)
<strong>wan2.2 T2i generated</strong>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;">
<strong>T2i</strong>
<a href="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250818092119_T2I_00001_.jpg" target="_blank">
<img src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250818092119_T2I_00001_.jpg"
alt="T2I"
style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
</a>
</div>
```bash
18yo, myjk, japanese,
A photorealistic upper-body portrait of a beautiful young woman with long black hair and black eyes, dressed in a school uniform. She is sitting on a stool, smiling with one eye closed in a playful grin, showing her teeth. Her hand is raised gently near her face, and a hair ornament with a black bow. The background is softly blurred, enhancing the cinematic atmosphere. She looks directly at the viewer, evoking a sense of charm and realism.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250818093735_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
18yo, myjk, japanese,
A Japanese idol is performing live on a brightly lit concert stage. She is wearing idol costume with Lace-up flared skirt. She sings and dances energetically, moving across the stage with graceful steps and expressive gestures. The camera follows her with dynamic motion: starting from a low-angle close-up of her smiling face, then pulling back to reveal the full stage with flashing lights and cheering fans. Her long hair flows with her movements, and her outfit sparkles under the spotlights. The scene includes cinematic lighting, fog effects, and smooth camera transitions that emphasize her presence and charm.
```
<br/>
---
<a id="myjc"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myjc_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280, 1024 x 1536.(T2i), 512x768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese JC face</span><br/>
<br/>
<br/>
# Download
[Download: myjc_High_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjc_High_v01.safetensors?download=true) <br />
[Download: myjc_Low_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjc_Low_v01.safetensors?download=true) <br />
<br />
# Trigger
```bash
myjc, japanese/european, photorealistic
and 13-15yo
```
<br />
# Sample prompt (v01)
<strong>wan2.2 T2i generated</strong>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;">
<strong>T2i</strong>
<a href="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250814111852_T2I_00001_.png" target="_blank">
<img src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250814111852_T2I_00001_.png"
alt="T2I"
style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
</a>
</div>
```bash
15yo, myjc, japanese, photorealistic,
A girl in school unifrom sitting seat at train.
She has black hair with sidelocks.
She is holding a smartphone and looking at it.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
480px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250814112118_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
myjc, japanese, photorealistic,
Close-up portrait of a girl walking at street.
She has a black twintails.
She is wearing white blouse.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<video controls loop style="width:
480px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250814112156_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
15yo, myjc, japanese, photorealistic,
A girl in school unifrom with short sleeves sitting chair at night time classroom.
She has black hair with sidelocks.
She is talking camera with smily.
```
<br/>
---
<a id="myjs"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myjs_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280, 1024 x 1536.(T2i), 512x768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese JS face</span><br/>
<br/>
<br/>
# Download
[Download: myjs_High_v01](https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/lora_wan2.2_myjs_High_v01.safetensors?download=true)<br>
[Download: myjs_Low_v01](https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/lora_wan2.2_myjs_Low_v01.safetensors?download=true)<br>
<br />
# Trigger
```bash
(myjsh / myjsm / myjsl), japanese/european, photorealistic
and 6-12yo
```
<br />
# Sample prompt (v01)
<strong>wan2.2 T2i generated</strong>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;">
<strong>T2i</strong>
<a href="https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/samples/20250811083806_T2I_LastImage_00001_.png?download=true" target="_blank">
<img src="https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/samples/20250811083806_T2I_LastImage_00001_.png"
alt="T2I"
style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
</a>
</div>
```bash
myjsh, japanese, photorealistic,
A Japanese girl with shoulder-length black hair, wearing a white textured blouse, standing outdoors in soft sunlight. She gently lifts her hand to brush her hair aside, as a breeze flows through the trees behind her. Her blouse flutters slightly, and her gaze shifts subtly toward the camera.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/samples/20250811084132_T2V_00001.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
12yo, myjsh, japanese, photorealistic,
A stylish girl posing for a fashion photoshoot in a minimalist studio. She wears a high-fashion outfit with layered textures: a translucent blouse over a structured corset, paired with wide-leg trousers. She shifts her pose gracefully, turning slightly to the side, adjusting her posture with subtle hand movements. Studio lights flash intermittently, casting soft shadows and highlights on her face and outfit. Her expression changes subtly from confident to playful. The camera slowly pans around her, capturing her elegance and motion. Cinematic lighting, fashion editorial style, photorealistic, expressive gesture, shallow depth of field, dynamic motion.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/samples/20250811104546_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
9yo, myjsm, japanese, photorealistic,
A girl wearing a white blouse and pleated skirt with suspenders walks the crowded school hallway.
She has a black ponytail.
Finally she turns around and smiles.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<video controls loop style="width:
480px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/samples/20250811112548_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
6yo, myjsl, japanese, photorealistic,
Girls are crossing the street with their one hand raised as their car waits.
```
<br/>
---
<a id="myjy"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myjy_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280, 1024 x 1536.(T2i), 512x768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese JY face</span><br/>
<br/>
<br/>
# Download
[Download: myjy_High_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjy_High_v01.safetensors?download=true)<br>
[Download: myjy_Low_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjy_Low_v01.safetensors?download=true)<br>
<br />
# Trigger
```bash
myjy, japanese/european, photorealistic
and 3-5yo
```
<br />
# Sample prompt (v01)
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
480px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250821095521_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
myjy, japanese,
A heartwarming indoor scene of three cheerful kindergarten girls clasping their own hands together in playful prayer. They wear colorful long-sleeved uniforms with blunt bangs and varied hairstyles: black hair in twintails, brown short hair, and long hair with a cute hair ornament. One girl holds a picture book with animal illustrations, another giggles softly, and the third looks up with wide, curious eyes. Their fingers are gently interlocked, lips slightly parted in a whisper of joy, and their expressions glow with innocence and wonder. The softly blurred background shows a cozy classroom with pastel decorations, adding warmth and charm to the moment.
```
<br/>
---
|
AnerYubo/blockassist-bc-elusive_mammalian_termite_1756595175
|
AnerYubo
| 2025-08-30T23:06:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"elusive mammalian termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T23:06:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- elusive mammalian termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moutiope/blockassist-bc-galloping_hardy_fish_1756595110
|
moutiope
| 2025-08-30T23:05:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"galloping hardy fish",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T23:05:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- galloping hardy fish
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bamposam/blockassist-bc-lumbering_tawny_gecko_1756594762
|
bamposam
| 2025-08-30T23:00:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering tawny gecko",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T22:59:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering tawny gecko
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amayuelas/Qwen3-4B-Wikirace-v6-SFT
|
amayuelas
| 2025-08-30T22:57:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:willcb/Qwen3-4B",
"base_model:finetune:willcb/Qwen3-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T22:28:16Z |
---
base_model: willcb/Qwen3-4B
library_name: transformers
model_name: Qwen3-4B-Wikirace-v6-SFT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen3-4B-Wikirace-v6-SFT
This model is a fine-tuned version of [willcb/Qwen3-4B](https://huggingface.co/willcb/Qwen3-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="amayuelas/Qwen3-4B-Wikirace-v6-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ucsb-nlp/huggingface/runs/rdgyiooi)
This model was trained with SFT.
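For reference, here is a minimal sketch of what an SFT run with TRL looks like; the dataset and hyperparameters below are placeholders, not the actual training configuration:
```python
# Minimal sketch of a TRL SFT run; dataset and hyperparameters are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="willcb/Qwen3-4B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen3-4B-Wikirace-v6-SFT", per_device_train_batch_size=2),
)
trainer.train()
```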
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756594412
|
ggozzy
| 2025-08-30T22:54:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T22:54:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1756593098
|
ypszn
| 2025-08-30T22:32:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T22:32:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756592965
|
bah63843
| 2025-08-30T22:30:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T22:30:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756592557
|
canoplos112
| 2025-08-30T22:24:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T22:23:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mssfj/Qwen3-4B_formatted-miromind-1000-grpo_prompt
|
mssfj
| 2025-08-30T22:21:47Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T22:21:46Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mssfj
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ultratopaz/705188
|
ultratopaz
| 2025-08-30T22:12:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T22:12:04Z |
[View on Civ Archive](https://civarchive.com/models/420594?modelVersionId=791669)
|
klmdr22/blockassist-bc-wild_loud_newt_1756591353
|
klmdr22
| 2025-08-30T22:03:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T22:03:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
espnet/owsm_v3.1_ebf
|
espnet
| 2025-08-30T21:54:56Z | 324 | 17 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"speech-translation",
"multilingual",
"dataset:owsm_v3.1",
"arxiv:2401.16658",
"arxiv:2210.00077",
"arxiv:2406.09282",
"arxiv:2309.13876",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2023-12-22T19:23:14Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
- speech-translation
language: multilingual
datasets:
- owsm_v3.1
license: cc-by-4.0
---
## OWSM: Open Whisper-style Speech Model
OWSM aims to develop fully open speech foundation models using publicly available data and open-source toolkits, including [ESPnet](https://github.com/espnet/espnet).
Inference examples can be found on our [project page](https://www.wavlab.org/activities/2024/owsm/).
Our demo is available [here](https://huggingface.co/spaces/pyf98/OWSM_v3_demo).
**[OWSM v3.1](https://arxiv.org/abs/2401.16658) is an improved version of OWSM v3. It significantly outperforms OWSM v3 in almost all evaluation benchmarks.**
We do not include any new training data. Instead, we utilize a state-of-the-art speech encoder, [E-Branchformer](https://arxiv.org/abs/2210.00077).
The model in this repo has 1.02B parameters in total and is trained on 180k hours of public speech data.
Specifically, it supports the following speech-to-text tasks:
- Speech recognition
- Any-to-any-language speech translation
- Utterance-level alignment
- Long-form transcription
- Language identification
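For example, transcription with ESPnet can look like the following minimal sketch (assuming the `espnet2` `Speech2Text` inference API and a 16 kHz mono recording; the symbols and beam size are illustrative):

```python
import soundfile as sf
from espnet2.bin.s2t_inference import Speech2Text

# Load the OWSM v3.1 medium model from this repo (weights are downloaded on first use).
s2t = Speech2Text.from_pretrained(
    "espnet/owsm_v3.1_ebf",
    lang_sym="<eng>",   # source language token
    task_sym="<asr>",   # speech recognition task
    beam_size=5,
)

speech, rate = sf.read("sample.wav")  # expected: 16 kHz mono audio
text, *_ = s2t(speech)[0]             # best hypothesis: (text, tokens, token_ids, ...)
print(text)
```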
### OWSM series
#### Encoder-decoder OWSM
| Name | Size | Hugging Face Repo |
| :--- | ---: | :---------------- |
| OWSM v3.1 base | 101M | https://huggingface.co/espnet/owsm_v3.1_ebf_base |
| OWSM v3.1 small | 367M | https://huggingface.co/espnet/owsm_v3.1_ebf_small |
| OWSM v3.1 medium | 1.02B | https://huggingface.co/espnet/owsm_v3.1_ebf |
| OWSM v3.2 small | 367M | https://huggingface.co/espnet/owsm_v3.2 |
| OWSM v4 base | 102M | https://huggingface.co/espnet/owsm_v4_base_102M |
| OWSM v4 small | 370M | https://huggingface.co/espnet/owsm_v4_small_370M |
| OWSM v4 medium | 1.02B | https://huggingface.co/espnet/owsm_v4_medium_1B |
#### CTC-based OWSM
| Name | Size | Hugging Face Repo |
| :--- | ---: | :---------------- |
| OWSM-CTC v3.1 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v3.1_1B |
| OWSM-CTC v3.2 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v3.2_ft_1B |
| OWSM-CTC v4 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v4_1B |
### Citations
#### OWSM v4
```BibTex
@inproceedings{owsm-v4,
title={{OWSM} v4: Improving Open Whisper-Style Speech Models via Data Scaling and Cleaning},
author={Yifan Peng and Shakeel Muhammad and Yui Sudo and William Chen and Jinchuan Tian and Chyi-Jiunn Lin and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2025},
}
```
#### OWSM-CTC
```BibTex
@inproceedings{owsm-ctc,
title = "{OWSM}-{CTC}: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification",
author = "Peng, Yifan and
Sudo, Yui and
Shakeel, Muhammad and
Watanabe, Shinji",
booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
year = "2024",
month= {8},
url = "https://aclanthology.org/2024.acl-long.549",
}
```
#### OWSM v3.1 and v3.2
```BibTex
@inproceedings{owsm-v32,
title={On the Effects of Heterogeneous Data Sources on Speech-to-Text Foundation Models},
author={Jinchuan Tian and Yifan Peng and William Chen and Kwanghee Choi and Karen Livescu and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2406.09282"
}
@inproceedings{owsm-v31,
title={{OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer}},
author={Yifan Peng and Jinchuan Tian and William Chen and Siddhant Arora and Brian Yan and Yui Sudo and Muhammad Shakeel and Kwanghee Choi and Jiatong Shi and Xuankai Chang and Jee-weon Jung and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2401.16658",
}
```
#### Initial OWSM (v1, v2, v3)
```BibTex
@inproceedings{owsm,
title={Reproducing Whisper-Style Training Using An Open-Source Toolkit And Publicly Available Data},
author={Yifan Peng and Jinchuan Tian and Brian Yan and Dan Berrebbi and Xuankai Chang and Xinjian Li and Jiatong Shi and Siddhant Arora and William Chen and Roshan Sharma and Wangyou Zhang and Yui Sudo and Muhammad Shakeel and Jee-weon Jung and Soumi Maiti and Shinji Watanabe},
booktitle={Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
year={2023},
month={12},
pdf="https://arxiv.org/pdf/2309.13876",
}
```
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756587822
|
acidjp
| 2025-08-30T21:46:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:46:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gradientdegen/task-13-microsoft-Phi-3.5-mini-instruct
|
gradientdegen
| 2025-08-30T21:38:01Z | 126 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-08-06T11:35:36Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756589248
|
Vasya777
| 2025-08-30T21:28:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:28:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756589075
|
ggozzy
| 2025-08-30T21:25:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T21:25:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mattiaferrarini/BERToli
|
mattiaferrarini
| 2025-08-30T21:10:43Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"music",
"song",
"lyrics",
"italian",
"it",
"base_model:dbmdz/bert-base-italian-xxl-cased",
"base_model:finetune:dbmdz/bert-base-italian-xxl-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-26T13:08:02Z |
---
license: mit
language:
- it
base_model:
- dbmdz/bert-base-italian-xxl-cased
tags:
- music
- song
- lyrics
- italian
pipeline_tag: fill-mask
library_name: transformers
---
# BERToli 🎶🇮🇹
## About the model
BERToli is a BERT model for Italian song lyrics. It was obtained via continued pretraining of [`dbmdz/bert-base-italian-xxl-cased`](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on ~106k Italian song lyrics from the [Genius Song Lyrics Dataset](https://www.kaggle.com/datasets/carlosgdcj/genius-song-lyrics-with-language-information).
The objective was Masked Language Modeling (MLM).
The training code is available on [GitHub](https://github.com/mattiaferrarini/BERToli).
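A minimal way to query the model, assuming the standard `transformers` fill-mask pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# [MASK] is the standard BERT mask token for this tokenizer.
fill = pipeline("fill-mask", model="mattiaferrarini/BERToli")
for pred in fill("Ho visto il [MASK] splendere sul mare."):  # "I saw the [MASK] shine over the sea."
    print(pred["token_str"], round(pred["score"], 3))
```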
## Evaluation
The base model and the adapted model were tested on a held-out set of ~6k songs with the following results:
| Model | MLM Loss | Perplexity |
|----------|----------|----------|
| Base | 1.94 | 6.95 |
| **BERToli** | **1.45** | **4.26** |
## Why BERToli?
[Pierangelo Bertoli](https://en.wikipedia.org/wiki/Pierangelo_Bertoli) (5 November 1942 – 7 October 2002) was an Italian singer-songwriter and poet.
|
mia-project-2025/pythia-1B-feature-extraction-wikitext-103
|
mia-project-2025
| 2025-08-30T21:09:24Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-30T20:27:44Z |
---
license: apache-2.0
---
# Pythia-1B Feature-Based Transfer Learning on WikiText-103
This repository contains a feature-based transfer learning experiment using the [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) model on the [WikiText-103](https://huggingface.co/datasets/Salesforce/wikitext) dataset.
The base model was **frozen**, and a lightweight trainable classification head was added for causal language modeling.
---
## Model Description
- **Base Model:** EleutherAI/pythia-1b
- **Training Paradigm:** Feature-based transfer learning (frozen base + new lightweight head)
- **Task:** Causal Language Modeling
- **Dataset:** WikiText-103 (raw v1)
The base model (`gpt_neox`) was frozen to retain pretrained knowledge. A new head (2-layer feedforward with ReLU and dropout) was trained on top of the hidden states for efficient adaptation.
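The actual head implementation lives in the `model` module used in the Usage section below; as a rough, hypothetical sketch of such a frozen-base setup (module structure and dropout value are assumptions, not the exact implementation), it could look like:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class FrozenPythiaWithNewHead(nn.Module):  # name mirrors the class imported in the Usage section
    def __init__(self, base_name="EleutherAI/pythia-1b", dropout=0.1):
        super().__init__()
        self.base = AutoModel.from_pretrained(base_name)  # gpt_neox backbone
        for p in self.base.parameters():
            p.requires_grad = False  # freeze all pretrained weights
        hidden = self.base.config.hidden_size
        vocab = self.base.config.vocab_size
        # Trainable 2-layer feedforward head with ReLU and dropout, as described above.
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, vocab),
        )

    def forward(self, input_ids, attention_mask=None, **kwargs):
        with torch.no_grad():
            hidden_states = self.base(input_ids, attention_mask=attention_mask).last_hidden_state
        return {"logits": self.head(hidden_states)}
```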
---
## Training Setup
- **Framework:** Transformers + PyTorch
- **GPU:** Multi-GPU (CUDA enabled)
- **Batch size:** 8 (gradient accumulation: 2)
- **Sequence length (block size):** 1024
- **Optimizer:** AdamW
- **Learning rate:** 2e-4 with cosine decay
- **Epochs:** 10
- **Mixed Precision:** FP16
- **Callbacks:** Early stopping, custom metric logging
---
## Results
### Final Training Metrics
- **Training Loss:** 2.6275
- **Final Step Loss:** 2.4289
- **Gradient Norm:** 0.3317
- **Learning Rate at End:** 1.55e-06
### Evaluation Metrics (Epoch 10)
- **Evaluation Loss:** 2.5432
- **Evaluation Perplexity:** 12.72
- **Evaluation Runtime:** 1.6039s
- **Samples per Second:** 150.26
- **Steps per Second:** 4.99
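As a quick sanity check, the reported perplexity is simply the exponential of the evaluation loss:

```python
import math

# exp(eval loss) recovers the reported evaluation perplexity
print(round(math.exp(2.5432), 2))  # 12.72
```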
---
## Usage
```python
from transformers import AutoTokenizer
import torch
from model import FrozenPythiaWithNewHead
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("./pythia-wikitext-feature")
# Load model
model = FrozenPythiaWithNewHead.from_pretrained("./pythia-wikitext-feature")
model.eval()
# Example
input_text = "The history of natural language processing"
inputs = tokenizer(input_text, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs["logits"]
next_token_id = torch.argmax(logits[:, -1, :], dim=-1)
print("Next token:", tokenizer.decode(next_token_id))
|
mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF
|
mradermacher
| 2025-08-30T20:52:38Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"ja",
"dataset:mpasila/ParallelFiction-Ja_En-1k-16k-Gemma-3-ShareGPT-Filtered",
"dataset:NilanE/ParallelFiction-Ja_En-100k",
"base_model:mpasila/Llama-3.1-Swallow-JP-EN-Translator-v1-8B",
"base_model:quantized:mpasila/Llama-3.1-Swallow-JP-EN-Translator-v1-8B",
"license:llama3.3",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-30T18:54:15Z |
---
base_model: mpasila/Llama-3.1-Swallow-JP-EN-Translator-v1-8B
datasets:
- mpasila/ParallelFiction-Ja_En-1k-16k-Gemma-3-ShareGPT-Filtered
- NilanE/ParallelFiction-Ja_En-100k
language:
- en
- ja
library_name: transformers
license:
- llama3.3
- gemma
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/mpasila/Llama-3.1-Swallow-JP-EN-Translator-v1-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
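As one concrete option, the quants in this repo can be run with the `llama-cpp-python` bindings (a minimal sketch; `Llama.from_pretrained` and the Q4_K_M filename from the table below are assumptions about your setup):

```python
from llama_cpp import Llama

# Downloads the GGUF file from this repo on first use (requires huggingface_hub).
llm = Llama.from_pretrained(
    repo_id="mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF",
    filename="Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Translate to English: 昔々、あるところに。", max_tokens=64)
print(out["choices"][0]["text"])
```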
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Swallow-JP-EN-Translator-v1-8B-i1-GGUF/resolve/main/Llama-3.1-Swallow-JP-EN-Translator-v1-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756586907
|
canoplos112
| 2025-08-30T20:50:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:49:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dxtrmst/gemma-3-270m-korean-tutor-v2
|
Dxtrmst
| 2025-08-30T20:37:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-28T20:05:01Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: gemma-3-270m-korean-tutor-v2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3-270m-korean-tutor-v2
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Dxtrmst/gemma-3-270m-korean-tutor-v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jezehelfranca-future_music/gemma-korean-tutor-finetuning/runs/cb9qx24l)
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756584944
|
canoplos112
| 2025-08-30T20:18:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:16:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/AceGPT-v1.5-7B-GGUF
|
mradermacher
| 2025-08-30T20:14:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"ar",
"zh",
"en",
"base_model:FreedomIntelligence/AceGPT-v1.5-7B",
"base_model:quantized:FreedomIntelligence/AceGPT-v1.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T15:37:21Z |
---
base_model: FreedomIntelligence/AceGPT-v1.5-7B
language:
- ar
- zh
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/FreedomIntelligence/AceGPT-v1.5-7B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#AceGPT-v1.5-7B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/AceGPT-v1.5-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
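A single quant file can also be fetched programmatically (a sketch using `huggingface_hub`; the filename is taken from the table below):

```python
from huggingface_hub import hf_hub_download

# Download one quant from this repo and get its local path.
path = hf_hub_download(
    repo_id="mradermacher/AceGPT-v1.5-7B-GGUF",
    filename="AceGPT-v1.5-7B.Q4_K_M.gguf",
)
print(path)  # ready to pass to llama.cpp or any GGUF-compatible runtime
```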
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.IQ4_XS.gguf) | IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.Q6_K.gguf) | Q6_K | 5.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-7B-GGUF/resolve/main/AceGPT-v1.5-7B.f16.gguf) | f16 | 13.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF
|
mradermacher
| 2025-08-30T20:01:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:cgato/Nemo-12b-CreativePretrain-HalfDone",
"base_model:quantized:cgato/Nemo-12b-CreativePretrain-HalfDone",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-30T17:02:50Z |
---
base_model: cgato/Nemo-12b-CreativePretrain-HalfDone
language:
- en
library_name: transformers
license: cc-by-nc-4.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/cgato/Nemo-12b-CreativePretrain-HalfDone
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Nemo-12b-CreativePretrain-HalfDone-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-CreativePretrain-HalfDone-GGUF/resolve/main/Nemo-12b-CreativePretrain-HalfDone.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756584005
|
eusuf01
| 2025-08-30T20:00:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T20:00:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756583911
|
eusuf01
| 2025-08-30T19:59:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T19:59:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756583122
|
eusuf01
| 2025-08-30T19:46:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T19:45:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756582816
|
eusuf01
| 2025-08-30T19:41:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T19:40:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
addopptu/blockassist-bc-snorting_skittish_lizard_1756582786
|
addopptu
| 2025-08-30T19:40:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting skittish lizard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T19:39:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting skittish lizard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756582668
|
eusuf01
| 2025-08-30T19:39:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T19:38:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
taewan2002/smolvla_libero_object
|
taewan2002
| 2025-08-30T19:26:04Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:aopolin-lv/libero_object_no_noops_lerobot_v21",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-30T19:25:41Z |
---
base_model: lerobot/smolvla_base
datasets: aopolin-lv/libero_object_no_noops_lerobot_v21
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- smolvla
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756581211
|
eusuf01
| 2025-08-30T19:14:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T19:13:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756581094
|
eusuf01
| 2025-08-30T19:12:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T19:11:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756580414
|
eusuf01
| 2025-08-30T19:00:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T19:00:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Bin12345/qwen2_5vl_venus_ground-7b_2561_1ep_sft
|
Bin12345
| 2025-08-30T18:34:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:inclusionAI/UI-Venus-Ground-7B",
"base_model:finetune:inclusionAI/UI-Venus-Ground-7B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-30T18:30:17Z |
---
library_name: transformers
license: other
base_model: inclusionAI/UI-Venus-Ground-7B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [inclusionAI/UI-Venus-Ground-7B](https://huggingface.co/inclusionAI/UI-Venus-Ground-7B) on the mllm_demo dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
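(The total train batch size follows from 12 per device × 4 devices × 2 gradient accumulation steps = 96; likewise 8 × 4 = 32 for evaluation.)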
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
stablellama/murderboots_flux_250829_R03_02
|
stablellama
| 2025-08-30T18:19:32Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"flux-diffusers",
"text-to-image",
"image-to-image",
"simpletuner",
"safe-for-work",
"lora",
"template:sd-lora",
"lycoris",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-30T17:34:00Z |
---
license: other
base_model: "flux/unknown-model"
tags:
- flux
- flux-diffusers
- text-to-image
- image-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- lycoris
pipeline_tag: text-to-image
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_0_0.png
- text: 'A close-up, low-angle shot of a person''s bare legs wearing murderboots, showing red painted toenails. The person is standing on a dark, reflective floor. The background is dimly lit and filled with warm bokeh lights, creating an elegant night-time atmosphere.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_1_0.png
- text: 'Full body shot of a woman standing on a wet, gritty city street after a rainstorm. She is wearing a long black trench coat, sunglasses, and a pair of murderboots. The boots are splashing in a puddle. The background is a blurred cityscape with reflections on the wet pavement. Photorealistic, cinematic.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_2_0.png
- text: 'A full-length studio photograph of two women standing side-by-side on a light grey background with soft studio lighting. The woman on the left is wearing sleek, black knee-high leather boots. The woman on the right is wearing a pair of black murderboots with silver buckles and peep-toes.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_3_0.png
- text: 'A candid, full-body photograph capturing the chaotic energy backstage at a high-fashion show. Three models are in frame. The model in the center is looking into a mirror, wearing a short, black leather skirt and a pair of murderboots. To the left, another model is sitting down, lacing up a pair of rugged, black leather combat boots, paired with cargo pants. On the right, a third model is walking past, wearing a long silk gown and classic, sharp stiletto pumps. The scene is lit by a mix of harsh overhead spotlights and the soft glow of vanity mirror lights, with clothing racks and equipment cases blurred in the background. Photorealistic, high detail.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_4_0.png
---
# murderboots_flux_250829_R03_02
This is a LyCORIS adapter derived from [flux/unknown-model](https://huggingface.co/flux/unknown-model).
No validation prompt was used during training.
## Validation settings
- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `1024x1024`
- Skip-layer guidance:
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 4
- Training steps: 280
- Learning rate: 0.0005
- Learning rate schedule: polynomial
- Warmup steps: 50
- Max grad value: 1.0
- Effective batch size: 4
- Micro-batch size: 4
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow_matching (extra parameters=['flow_schedule_auto_shift', 'shift=0.0', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0'])
- Optimizer: optimi-lion (config=weight_decay=1e-3)
- Trainable parameter precision: Pure BF16
- Base model precision: `int8-quanto`
- Caption dropout probability: 0.0%
### LyCORIS Config:
```json
{
"algo": "lokr",
"multiplier": 1.0,
"linear_dim": 10000,
"linear_alpha": 1,
"factor": 24,
"use_scalar": true,
"full_matrix": true,
"apply_preset": {
"target_module": [
"Attention",
"FeedForward"
],
"module_algo_map": {
"Attention": {
"factor": 24
},
"FeedForward": {
"factor": 12
}
}
}
}
```
## Datasets
### regularisation-data-1024
- Repeats: 0
- Total number of images: 28
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: Yes
### regularisation-data
- Repeats: 2
- Total number of images: 28
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: Yes
### clothing-1024-image
- Repeats: 0
- Total number of images: 28
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### clothing-512-image
- Repeats: 2
- Total number of images: 28
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
## Inference
```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights
def download_adapter(repo_id: str):
import os
from huggingface_hub import hf_hub_download
adapter_filename = "pytorch_lora_weights.safetensors"
cache_dir = os.environ.get('HF_PATH', os.path.expanduser('~/.cache/huggingface/hub/models'))
cleaned_adapter_path = repo_id.replace("/", "_").replace("\\", "_").replace(":", "_")
path_to_adapter = os.path.join(cache_dir, cleaned_adapter_path)
path_to_adapter_file = os.path.join(path_to_adapter, adapter_filename)
os.makedirs(path_to_adapter, exist_ok=True)
hf_hub_download(
repo_id=repo_id, filename=adapter_filename, local_dir=path_to_adapter
)
return path_to_adapter_file
model_id = '/root/FLUX.1-dev/'
adapter_repo_id = 'stablellama/murderboots_flux_250829_R03_02'
adapter_filename = 'pytorch_lora_weights.safetensors'
adapter_file_path = download_adapter(repo_id=adapter_repo_id)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16) # loading directly in bf16
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()
prompt = "An astronaut is riding a horse through the jungles of Thailand."
## Optional: quantise the model to save on vram.
## Note: The model was quantised during training, and so it is recommended to do the same during inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') # the pipeline is already in its target precision level
model_output = pipeline(
prompt=prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(42),
width=1024,
height=1024,
guidance_scale=3.5,
).images[0]
model_output.save("output.png", format="PNG")
```
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1756575143
|
milliarderdol
| 2025-08-30T18:12:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T18:11:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
faztasia/babylofi
|
faztasia
| 2025-08-30T18:11:17Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-30T18:09:07Z |
---
license: apache-2.0
---
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756577384
|
ggozzy
| 2025-08-30T18:10:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T18:10:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rahulseetharaman/reranker-ettin-encoder-150m-msmarco-bce-10m
|
rahulseetharaman
| 2025-08-30T17:36:28Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:9960000",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"en",
"dataset:sentence-transformers/msmarco",
"arxiv:1908.10084",
"base_model:jhu-clsp/ettin-encoder-150m",
"base_model:finetune:jhu-clsp/ettin-encoder-150m",
"model-index",
"region:us"
] |
text-ranking
| 2025-08-30T17:36:13Z |
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:9960000
- loss:BinaryCrossEntropyLoss
base_model: jhu-clsp/ettin-encoder-150m
datasets:
- sentence-transformers/msmarco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on jhu-clsp/ettin-encoder-150m
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoMSMARCO R100
type: NanoMSMARCO_R100
metrics:
- type: map
value: 0.6651
name: Map
- type: mrr@10
value: 0.6587
name: Mrr@10
- type: ndcg@10
value: 0.7166
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoNFCorpus R100
type: NanoNFCorpus_R100
metrics:
- type: map
value: 0.3859
name: Map
- type: mrr@10
value: 0.5643
name: Mrr@10
- type: ndcg@10
value: 0.4197
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoNQ R100
type: NanoNQ_R100
metrics:
- type: map
value: 0.691
name: Map
- type: mrr@10
value: 0.7127
name: Mrr@10
- type: ndcg@10
value: 0.743
name: Ndcg@10
- task:
type: cross-encoder-nano-beir
name: Cross Encoder Nano BEIR
dataset:
name: NanoBEIR R100 mean
type: NanoBEIR_R100_mean
metrics:
- type: map
value: 0.5807
name: Map
- type: mrr@10
value: 0.6453
name: Mrr@10
- type: ndcg@10
value: 0.6264
name: Ndcg@10
---
# CrossEncoder based on jhu-clsp/ettin-encoder-150m
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [jhu-clsp/ettin-encoder-150m](https://huggingface.co/jhu-clsp/ettin-encoder-150m) on the [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [jhu-clsp/ettin-encoder-150m](https://huggingface.co/jhu-clsp/ettin-encoder-150m) <!-- at revision 45d08642849e5c5701b162671ac811b7654bfd9f -->
- **Maximum Sequence Length:** 7999 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("rahulseetharaman/reranker-ettin-encoder-150m-msmarco-bce-10m")
# Get scores for pairs of texts
pairs = [
['select committees definition government', 'There are four types of congressional committees: 1 Standing committees, which continue from one Congress to the next, are probably the most important type because they consider and shape the vast majority of proposed laws. 2 Select committees are temporarily formed for specific purposes, often to study a particular issue.'],
['what is a perceptual map', 'Welcome to our New Castle, Pennsylvania street map page. The street map of New Castle PA that is located below is provided by Google Maps. You can grab the New Castle Pennsylvania street map and move it around to re-centre the map. You can change between standard map view, satellite map view and hybrid map view.'],
['what makes your skin feel cold and burn', 'When the wind blows in cold weather, you feel colder than the actual temperature because the air blows away heat from your skin faster. For instance, if the temperature is -17.8 Celsius (0 Fahrenheit) and the wind blows at 15 mph, it feels like -28.3 Celsius (-19 Fahrenheit) -- exposed skin can freeze in 30 minutes.'],
['average act score for university of georgia', 'A graph of UB, University at Buffalo GPA, SAT score, and ACT score admissions data for students who were accepted, rejected, and waitlisted. A graph of UB, University at Buffalo GPA, SAT score, and ACT score admissions data for students who were accepted, rejected, and waitlisted. University at Buffalo GPA, SAT and ACT Data Search the site GO'],
['when was the ontario, ca, post office established', 'In 1832 Jed Jackson had the contract for carrying mail from Brantford to London twice a week along the Old Stage Road. On October 6, 1835, a post office was established at Woodstock, Ontario, with Princeton following within two years. According to the Legislative Council Sessional Papers for 1846, a post office was established at Princeton on May 6, 1836 and Jeremiah Cowin was appointed postmaster on May 9, 1837. The sureties were George Beamer and Silas Martin to the amount of £200. The assistant was John Charles.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'select committees definition government',
[
'There are four types of congressional committees: 1 Standing committees, which continue from one Congress to the next, are probably the most important type because they consider and shape the vast majority of proposed laws. 2 Select committees are temporarily formed for specific purposes, often to study a particular issue.',
'Welcome to our New Castle, Pennsylvania street map page. The street map of New Castle PA that is located below is provided by Google Maps. You can grab the New Castle Pennsylvania street map and move it around to re-centre the map. You can change between standard map view, satellite map view and hybrid map view.',
'When the wind blows in cold weather, you feel colder than the actual temperature because the air blows away heat from your skin faster. For instance, if the temperature is -17.8 Celsius (0 Fahrenheit) and the wind blows at 15 mph, it feels like -28.3 Celsius (-19 Fahrenheit) -- exposed skin can freeze in 30 minutes.',
'A graph of UB, University at Buffalo GPA, SAT score, and ACT score admissions data for students who were accepted, rejected, and waitlisted. A graph of UB, University at Buffalo GPA, SAT score, and ACT score admissions data for students who were accepted, rejected, and waitlisted. University at Buffalo GPA, SAT and ACT Data Search the site GO',
'In 1832 Jed Jackson had the contract for carrying mail from Brantford to London twice a week along the Old Stage Road. On October 6, 1835, a post office was established at Woodstock, Ontario, with Princeton following within two years. According to the Legislative Council Sessional Papers for 1846, a post office was established at Princeton on May 6, 1836 and Jeremiah Cowin was appointed postmaster on May 9, 1837. The sureties were George Beamer and Silas Martin to the amount of £200. The assistant was John Charles.',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
|:------------|:---------------------|:---------------------|:---------------------|
| map | 0.6651 (+0.1755) | 0.3859 (+0.1249) | 0.6910 (+0.2714) |
| mrr@10 | 0.6587 (+0.1812) | 0.5643 (+0.0645) | 0.7127 (+0.2861) |
| **ndcg@10** | **0.7166 (+0.1762)** | **0.4197 (+0.0947)** | **0.7430 (+0.2423)** |
#### Cross Encoder Nano BEIR
* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
],
"rerank_k": 100,
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.5807 (+0.1906) |
| mrr@10 | 0.6453 (+0.1773) |
| **ndcg@10** | **0.6264 (+0.1711)** |
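The NanoBEIR numbers above can be reproduced locally. Below is a minimal sketch, assuming the evaluator downloads the NanoBEIR datasets on first use; exact scores may vary slightly with library versions.
```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

# Minimal sketch of re-running the NanoBEIR evaluation with the parameters above.
model = CrossEncoder("rahulseetharaman/reranker-ettin-encoder-150m-msmarco-bce-10m")
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)  # per-dataset map / mrr@10 / ndcg@10 plus the NanoBEIR mean
```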
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### msmarco
* Dataset: [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco) at [9e329ed](https://huggingface.co/datasets/sentence-transformers/msmarco/tree/9e329ed2e649c9d37b0d91dd6b764ff6fe671d83)
* Size: 9,960,000 training samples
* Columns: <code>query</code>, <code>passage</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | passage | score |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 9 characters</li><li>mean: 33.93 characters</li><li>max: 110 characters</li></ul> | <ul><li>min: 80 characters</li><li>mean: 348.08 characters</li><li>max: 897 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> |
* Samples:
| query | passage | score |
|:------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>nap in chinese</code> | <code>continued... Most children from about 21 to 36 months of age still need one nap a day, which may range from one to three and a half hours long.They typically go to bed between 7 and 9 p.m. and wake up between 6 and 8 a.m. 3-6 Years Old: 10 - 12 hours per day.ontinued... Most children from about 21 to 36 months of age still need one nap a day, which may range from one to three and a half hours long.</code> | <code>0.0</code> |
| <code>what abdominal organ is most frequently injured as a result of blunt trauma?</code> | <code>Bochdalek Hernia. Bochdalek hernia is a congenital posterolateral diaphragmatic defect that is a result of failed closure of the pleuroperitoneal ducts -- a primitive communications between the pleural and abdominal cavities -- at 8 weeks' gestation.ochdalek Hernia. Bochdalek hernia is a congenital posterolateral diaphragmatic defect that is a result of failed closure of the pleuroperitoneal ducts -- a primitive communications between the pleural and abdominal cavities -- at 8 weeks' gestation.</code> | <code>0.0</code> |
| <code>where is round rock tx</code> | <code>Driving distance from Dallas, TX to Fort Worth, TX The total driving distance from Dallas, TX to Fort Worth, TX is 33 miles or 53 kilometers. Your trip begins in Dallas, Texas. It ends in Fort Worth, Texas. If you are planning a road trip, you might also want to calculate the total driving time from Dallas, TX to Fort Worth, TX so you can see when you'll arrive at your destination. You can also calculate the cost of driving from Dallas, TX to Fort Worth, TX based on current local fuel prices and an estimate of your car's best gas mileage.</code> | <code>0.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
### Evaluation Dataset
#### msmarco
* Dataset: [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco) at [9e329ed](https://huggingface.co/datasets/sentence-transformers/msmarco/tree/9e329ed2e649c9d37b0d91dd6b764ff6fe671d83)
* Size: 40,000 evaluation samples
* Columns: <code>query</code>, <code>passage</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | passage | score |
|:--------|:----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 11 characters</li><li>mean: 34.1 characters</li><li>max: 96 characters</li></ul> | <ul><li>min: 75 characters</li><li>mean: 341.31 characters</li><li>max: 938 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> |
* Samples:
| query | passage | score |
|:-----------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>select committees definition government</code> | <code>There are four types of congressional committees: 1 Standing committees, which continue from one Congress to the next, are probably the most important type because they consider and shape the vast majority of proposed laws. 2 Select committees are temporarily formed for specific purposes, often to study a particular issue.</code> | <code>1.0</code> |
| <code>what is a perceptual map</code> | <code>Welcome to our New Castle, Pennsylvania street map page. The street map of New Castle PA that is located below is provided by Google Maps. You can grab the New Castle Pennsylvania street map and move it around to re-centre the map. You can change between standard map view, satellite map view and hybrid map view.</code> | <code>0.0</code> |
| <code>what makes your skin feel cold and burn</code> | <code>When the wind blows in cold weather, you feel colder than the actual temperature because the air blows away heat from your skin faster. For instance, if the temperature is -17.8 Celsius (0 Fahrenheit) and the wind blows at 15 mph, it feels like -28.3 Celsius (-19 Fahrenheit) -- exposed skin can freeze in 30 minutes.</code> | <code>0.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
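For orientation, a minimal training sketch with this loss is shown below. The dataset subset name and output path are assumptions for illustration; the actual run used the hyperparameters listed in the next section.
```python
from datasets import load_dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Minimal sketch: fine-tune a cross-encoder with BinaryCrossEntropyLoss on
# (query, passage, score) pairs. The "labeled-pair" subset name is an assumption;
# use whichever msmarco subset exposes these columns.
model = CrossEncoder("jhu-clsp/ettin-encoder-150m", num_labels=1)
train_ds = load_dataset("sentence-transformers/msmarco", "labeled-pair", split="train")

args = CrossEncoderTrainingArguments(
    output_dir="reranker-ettin-150m-msmarco",  # illustrative path
    num_train_epochs=4,
    per_device_train_batch_size=64,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    bf16=True,
)
trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    loss=BinaryCrossEntropyLoss(model),
)
trainer.train()
```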
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:---------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1 | -1 | - | - | 0.0509 (-0.4895) | 0.2434 (-0.0816) | 0.0190 (-0.4816) | 0.1045 (-0.3509) |
| 0.0000 | 1 | 0.702 | - | - | - | - | - |
| 0.0643 | 10000 | 0.3212 | 0.1845 | 0.6628 (+0.1223) | 0.3851 (+0.0600) | 0.7245 (+0.2239) | 0.5908 (+0.1354) |
| 0.1285 | 20000 | 0.1637 | 0.1600 | 0.6902 (+0.1498) | 0.4287 (+0.1037) | 0.7385 (+0.2378) | 0.6192 (+0.1638) |
| **0.1928** | **30000** | **0.1448** | **0.1348** | **0.7166 (+0.1762)** | **0.4197 (+0.0947)** | **0.7430 (+0.2423)** | **0.6264 (+0.1711)** |
| 0.2570 | 40000 | 0.1296 | 0.1235 | 0.7022 (+0.1618) | 0.4111 (+0.0861) | 0.7192 (+0.2185) | 0.6108 (+0.1555) |
| 0.3213 | 50000 | 0.1197 | 0.1145 | 0.6887 (+0.1483) | 0.4032 (+0.0782) | 0.7460 (+0.2454) | 0.6126 (+0.1573) |
| 0.3855 | 60000 | 0.11 | 0.1077 | 0.7246 (+0.1842) | 0.4057 (+0.0807) | 0.7140 (+0.2133) | 0.6148 (+0.1594) |
| 0.4498 | 70000 | 0.1034 | 0.1054 | 0.7054 (+0.1650) | 0.4067 (+0.0817) | 0.7279 (+0.2273) | 0.6133 (+0.1580) |
| 0.5141 | 80000 | 0.0948 | 0.0893 | 0.6948 (+0.1544) | 0.4061 (+0.0810) | 0.7326 (+0.2320) | 0.6112 (+0.1558) |
| 0.5783 | 90000 | 0.0876 | 0.0846 | 0.6980 (+0.1576) | 0.4201 (+0.0951) | 0.7382 (+0.2376) | 0.6188 (+0.1634) |
| 0.6426 | 100000 | 0.0813 | 0.0803 | 0.7071 (+0.1667) | 0.4088 (+0.0838) | 0.7418 (+0.2411) | 0.6193 (+0.1639) |
| 0.7068 | 110000 | 0.0765 | 0.0757 | 0.7119 (+0.1715) | 0.3921 (+0.0671) | 0.7374 (+0.2367) | 0.6138 (+0.1584) |
| 0.7711 | 120000 | 0.0718 | 0.0683 | 0.6998 (+0.1594) | 0.3759 (+0.0508) | 0.7008 (+0.2001) | 0.5922 (+0.1368) |
| 0.8353 | 130000 | 0.0679 | 0.0694 | 0.7266 (+0.1862) | 0.3474 (+0.0224) | 0.7023 (+0.2016) | 0.5921 (+0.1367) |
| 0.8996 | 140000 | 0.0643 | 0.0727 | 0.7264 (+0.1860) | 0.3641 (+0.0391) | 0.7433 (+0.2427) | 0.6113 (+0.1559) |
| 0.9639 | 150000 | 0.0615 | 0.0612 | 0.6773 (+0.1369) | 0.3789 (+0.0539) | 0.7462 (+0.2456) | 0.6008 (+0.1455) |
| 1.0281 | 160000 | 0.0512 | 0.0645 | 0.6967 (+0.1562) | 0.3426 (+0.0175) | 0.7353 (+0.2347) | 0.5915 (+0.1361) |
| 1.0924 | 170000 | 0.0432 | 0.0617 | 0.6741 (+0.1337) | 0.3606 (+0.0356) | 0.7372 (+0.2366) | 0.5907 (+0.1353) |
| 1.1566 | 180000 | 0.0423 | 0.0624 | 0.6597 (+0.1193) | 0.3267 (+0.0016) | 0.7163 (+0.2156) | 0.5675 (+0.1122) |
| 1.2209 | 190000 | 0.0407 | 0.0578 | 0.6855 (+0.1450) | 0.3317 (+0.0066) | 0.7011 (+0.2004) | 0.5728 (+0.1174) |
| 1.2851 | 200000 | 0.0406 | 0.0530 | 0.6773 (+0.1368) | 0.3949 (+0.0699) | 0.6882 (+0.1876) | 0.5868 (+0.1314) |
| 1.3494 | 210000 | 0.0388 | 0.0560 | 0.6659 (+0.1255) | 0.3581 (+0.0331) | 0.7270 (+0.2264) | 0.5837 (+0.1283) |
| 1.4137 | 220000 | 0.038 | 0.0505 | 0.6710 (+0.1306) | 0.3679 (+0.0428) | 0.7030 (+0.2024) | 0.5806 (+0.1253) |
| 1.4779 | 230000 | 0.0374 | 0.0523 | 0.6649 (+0.1245) | 0.3602 (+0.0352) | 0.6936 (+0.1930) | 0.5729 (+0.1175) |
| 1.5422 | 240000 | 0.0359 | 0.0488 | 0.6786 (+0.1382) | 0.3716 (+0.0465) | 0.7102 (+0.2095) | 0.5868 (+0.1314) |
| 1.6064 | 250000 | 0.0343 | 0.0476 | 0.6709 (+0.1304) | 0.3907 (+0.0657) | 0.7027 (+0.2021) | 0.5881 (+0.1327) |
| 1.6707 | 260000 | 0.034 | 0.0493 | 0.6488 (+0.1084) | 0.3583 (+0.0333) | 0.6981 (+0.1975) | 0.5684 (+0.1131) |
| 1.7349 | 270000 | 0.0329 | 0.0462 | 0.6873 (+0.1468) | 0.3527 (+0.0276) | 0.6974 (+0.1968) | 0.5791 (+0.1237) |
| 1.7992 | 280000 | 0.032 | 0.0443 | 0.6657 (+0.1252) | 0.3646 (+0.0396) | 0.7018 (+0.2012) | 0.5774 (+0.1220) |
| 1.8635 | 290000 | 0.0305 | 0.0448 | 0.6660 (+0.1256) | 0.3594 (+0.0344) | 0.7223 (+0.2216) | 0.5826 (+0.1272) |
| 1.9277 | 300000 | 0.0298 | 0.0432 | 0.6713 (+0.1309) | 0.3815 (+0.0564) | 0.6878 (+0.1871) | 0.5802 (+0.1248) |
| 1.9920 | 310000 | 0.0296 | 0.0410 | 0.6472 (+0.1067) | 0.3907 (+0.0657) | 0.7104 (+0.2098) | 0.5828 (+0.1274) |
| 2.0562 | 320000 | 0.0156 | 0.0572 | 0.5978 (+0.0573) | 0.3246 (-0.0004) | 0.7005 (+0.1999) | 0.5410 (+0.0856) |
| 2.1205 | 330000 | 0.0143 | 0.0569 | 0.6302 (+0.0898) | 0.3318 (+0.0068) | 0.6832 (+0.1825) | 0.5484 (+0.0930) |
| 2.1847 | 340000 | 0.0141 | 0.0556 | 0.5810 (+0.0406) | 0.4088 (+0.0838) | 0.7054 (+0.2047) | 0.5651 (+0.1097) |
| 2.2490 | 350000 | 0.0137 | 0.0473 | 0.6491 (+0.1087) | 0.3994 (+0.0743) | 0.7180 (+0.2174) | 0.5888 (+0.1335) |
| 2.3133 | 360000 | 0.0136 | 0.0524 | 0.6171 (+0.0767) | 0.3925 (+0.0674) | 0.7071 (+0.2065) | 0.5722 (+0.1169) |
| 2.3775 | 370000 | 0.0133 | 0.0446 | 0.6065 (+0.0661) | 0.3800 (+0.0549) | 0.7328 (+0.2321) | 0.5731 (+0.1177) |
| 2.4418 | 380000 | 0.0128 | 0.0448 | 0.6336 (+0.0932) | 0.3846 (+0.0596) | 0.7093 (+0.2087) | 0.5759 (+0.1205) |
| 2.5060 | 390000 | 0.013 | 0.0445 | 0.6135 (+0.0731) | 0.3745 (+0.0495) | 0.6582 (+0.1575) | 0.5487 (+0.0934) |
| 2.5703 | 400000 | 0.0122 | 0.0451 | 0.6492 (+0.1088) | 0.3576 (+0.0326) | 0.6963 (+0.1956) | 0.5677 (+0.1123) |
| 2.6345 | 410000 | 0.0122 | 0.0473 | 0.6129 (+0.0725) | 0.3555 (+0.0305) | 0.6928 (+0.1922) | 0.5537 (+0.0984) |
| 2.6988 | 420000 | 0.0119 | 0.0488 | 0.6048 (+0.0644) | 0.3459 (+0.0209) | 0.6712 (+0.1705) | 0.5406 (+0.0852) |
| 2.7631 | 430000 | 0.012 | 0.0452 | 0.6402 (+0.0997) | 0.3499 (+0.0249) | 0.6717 (+0.1711) | 0.5539 (+0.0986) |
| 2.8273 | 440000 | 0.0115 | 0.0409 | 0.6267 (+0.0863) | 0.3349 (+0.0098) | 0.6819 (+0.1812) | 0.5478 (+0.0924) |
| 2.8916 | 450000 | 0.0108 | 0.0381 | 0.6183 (+0.0779) | 0.3546 (+0.0296) | 0.6942 (+0.1935) | 0.5557 (+0.1003) |
| 2.9558 | 460000 | 0.0103 | 0.0357 | 0.6337 (+0.0933) | 0.3595 (+0.0344) | 0.7096 (+0.2090) | 0.5676 (+0.1122) |
| 3.0201 | 470000 | 0.008 | 0.0516 | 0.6187 (+0.0783) | 0.3454 (+0.0204) | 0.6997 (+0.1990) | 0.5546 (+0.0992) |
| 3.0843 | 480000 | 0.0033 | 0.0584 | 0.6074 (+0.0669) | 0.3371 (+0.0120) | 0.6449 (+0.1443) | 0.5298 (+0.0744) |
| 3.1486 | 490000 | 0.0032 | 0.0568 | 0.5956 (+0.0552) | 0.3635 (+0.0384) | 0.6796 (+0.1789) | 0.5462 (+0.0909) |
| 3.2129 | 500000 | 0.0034 | 0.0512 | 0.5984 (+0.0580) | 0.3784 (+0.0534) | 0.7056 (+0.2050) | 0.5608 (+0.1055) |
| 3.2771 | 510000 | 0.0031 | 0.0557 | 0.5911 (+0.0506) | 0.3770 (+0.0520) | 0.6941 (+0.1935) | 0.5541 (+0.0987) |
| 3.3414 | 520000 | 0.0028 | 0.0462 | 0.6256 (+0.0852) | 0.3541 (+0.0291) | 0.7188 (+0.2181) | 0.5662 (+0.1108) |
| 3.4056 | 530000 | 0.0026 | 0.0589 | 0.5909 (+0.0505) | 0.3432 (+0.0182) | 0.6992 (+0.1986) | 0.5444 (+0.0891) |
| 3.4699 | 540000 | 0.0025 | 0.0555 | 0.6072 (+0.0668) | 0.3783 (+0.0532) | 0.6961 (+0.1954) | 0.5605 (+0.1052) |
| 3.5341 | 550000 | 0.0023 | 0.0543 | 0.5978 (+0.0573) | 0.3662 (+0.0411) | 0.6817 (+0.1811) | 0.5485 (+0.0932) |
| 3.5984 | 560000 | 0.0025 | 0.0522 | 0.5990 (+0.0586) | 0.3565 (+0.0314) | 0.6988 (+0.1982) | 0.5514 (+0.0961) |
| 3.6627 | 570000 | 0.002 | 0.0463 | 0.6031 (+0.0627) | 0.3535 (+0.0285) | 0.6682 (+0.1675) | 0.5416 (+0.0862) |
| 3.7269 | 580000 | 0.0019 | 0.0485 | 0.6239 (+0.0834) | 0.3625 (+0.0375) | 0.6832 (+0.1826) | 0.5565 (+0.1012) |
| 3.7912 | 590000 | 0.002 | 0.0465 | 0.6046 (+0.0642) | 0.3546 (+0.0296) | 0.6680 (+0.1674) | 0.5424 (+0.0871) |
| 3.8554 | 600000 | 0.0019 | 0.0450 | 0.5990 (+0.0586) | 0.3536 (+0.0286) | 0.6716 (+0.1709) | 0.5414 (+0.0860) |
| 3.9197 | 610000 | 0.0017 | 0.0434 | 0.6078 (+0.0674) | 0.3537 (+0.0286) | 0.6781 (+0.1775) | 0.5465 (+0.0912) |
| 3.9839 | 620000 | 0.0012 | 0.0430 | 0.6100 (+0.0695) | 0.3510 (+0.0260) | 0.6721 (+0.1715) | 0.5444 (+0.0890) |
| -1 | -1 | - | - | 0.7166 (+0.1762) | 0.4197 (+0.0947) | 0.7430 (+0.2423) | 0.6264 (+0.1711) |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.18
- Sentence Transformers: 5.0.0
- Transformers: 4.56.0.dev0
- PyTorch: 2.7.1+cu126
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
nick1880/blockassist-bc-barky_powerful_falcon_1756574724
|
nick1880
| 2025-08-30T17:26:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky powerful falcon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T17:26:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky powerful falcon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1756574306
|
Stasonelison
| 2025-08-30T17:19:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T17:19:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756572343
|
GroomerG
| 2025-08-30T17:15:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T17:15:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ba2han/GPT-OSS-20b-augment_
|
Ba2han
| 2025-08-30T17:05:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T16:54:49Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Ba2han
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756572048
|
ggozzy
| 2025-08-30T16:41:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T16:41:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/LatentDream-exp-beta-8b-GGUF
|
mradermacher
| 2025-08-30T16:30:25Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Entropicengine/LatentDream-exp-beta-8b",
"base_model:quantized:Entropicengine/LatentDream-exp-beta-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-30T15:54:18Z |
---
base_model: Entropicengine/LatentDream-exp-beta-8b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Entropicengine/LatentDream-exp-beta-8b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LatentDream-exp-beta-8b-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
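As one concrete option, here is a minimal sketch using `llama-cpp-python` (an assumption; any GGUF-capable runtime such as llama.cpp, ollama, or LM Studio works equally well):
```python
# Minimal sketch, assuming `pip install llama-cpp-python huggingface_hub`.
# The Q4_K_M quant is one of the files listed in the table below.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/LatentDream-exp-beta-8b-GGUF",
    filename="LatentDream-exp-beta-8b.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```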
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LatentDream-exp-beta-8b-GGUF/resolve/main/LatentDream-exp-beta-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
efe1903/murat1903
|
efe1903
| 2025-08-30T16:22:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"conversational",
"en",
"arxiv:2204.05149",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-30T16:07:38Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
language:
- en
library_name: transformers
license: llama3.1
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing) | 2x faster | 60% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Input modalities</strong>
</td>
<td><strong>Output modalities</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="3" >Llama 3.1 (text only)
</td>
<td rowspan="3" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
<td rowspan="3" >15T+
</td>
<td rowspan="3" >December 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
<tr>
<td>405B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
</table>
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
**Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** July 23, 2024.
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card**.
**<span style="text-decoration:underline;">Note</span>**: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages, provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.
## How to use
This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
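For completeness, the Auto-classes path mentioned above can be sketched as follows; this mirrors the pipeline example and uses only standard Transformers APIs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Minimal sketch of conversational inference via the Auto classes and generate().
model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```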
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
```bash
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
```
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training utilized a cumulative** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
<table>
<tr>
<td>
</td>
<td><strong>Training Time (GPU hours)</strong>
</td>
<td><strong>Training Power Consumption (W)</strong>
</td>
<td><strong>Training Location-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
<td><strong>Training Market-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3.1 8B
</td>
<td>1.46M
</td>
<td>700
</td>
<td>420
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 70B
</td>
<td>7.0M
</td>
<td>700
</td>
<td>2,040
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 405B
</td>
<td>30.84M
</td>
<td>700
</td>
<td>8,930
</td>
<td>0
</td>
</tr>
<tr>
<td>Total
</td>
<td>39.3M
</td>
<td>
</td>
<td>11,390
</td>
<td>0
</td>
</tr>
</table>
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmark scores
In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
<td rowspan="7" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>66.7
</td>
<td>66.7
</td>
<td>79.5
</td>
<td>79.3
</td>
<td>85.2
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>36.2
</td>
<td>37.1
</td>
<td>55.0
</td>
<td>53.8
</td>
<td>61.6
</td>
</tr>
<tr>
<td>AGIEval English
</td>
<td>3-5
</td>
<td>average/acc_char
</td>
<td>47.1
</td>
<td>47.8
</td>
<td>63.0
</td>
<td>64.6
</td>
<td>71.6
</td>
</tr>
<tr>
<td>CommonSenseQA
</td>
<td>7
</td>
<td>acc_char
</td>
<td>72.6
</td>
<td>75.0
</td>
<td>83.8
</td>
<td>84.1
</td>
<td>85.8
</td>
</tr>
<tr>
<td>Winogrande
</td>
<td>5
</td>
<td>acc_char
</td>
<td>-
</td>
<td>60.5
</td>
<td>-
</td>
<td>83.3
</td>
<td>86.7
</td>
</tr>
<tr>
<td>BIG-Bench Hard (CoT)
</td>
<td>3
</td>
<td>average/em
</td>
<td>61.1
</td>
<td>64.2
</td>
<td>81.3
</td>
<td>81.6
</td>
<td>85.9
</td>
</tr>
<tr>
<td>ARC-Challenge
</td>
<td>25
</td>
<td>acc_char
</td>
<td>79.4
</td>
<td>79.7
</td>
<td>93.1
</td>
<td>92.9
</td>
<td>96.1
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki
</td>
<td>5
</td>
<td>em
</td>
<td>78.5
</td>
<td>77.6
</td>
<td>89.7
</td>
<td>89.8
</td>
<td>91.8
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD
</td>
<td>1
</td>
<td>em
</td>
<td>76.4
</td>
<td>77.0
</td>
<td>85.6
</td>
<td>81.8
</td>
<td>89.3
</td>
</tr>
<tr>
<td>QuAC (F1)
</td>
<td>1
</td>
<td>f1
</td>
<td>44.4
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>51.1
</td>
<td>53.6
</td>
</tr>
<tr>
<td>BoolQ
</td>
<td>0
</td>
<td>acc_char
</td>
<td>75.7
</td>
<td>75.0
</td>
<td>79.0
</td>
<td>79.4
</td>
<td>80.0
</td>
</tr>
<tr>
<td>DROP (F1)
</td>
<td>3
</td>
<td>f1
</td>
<td>58.4
</td>
<td>59.5
</td>
<td>79.7
</td>
<td>79.6
</td>
<td>84.8
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B Instruct</strong>
</td>
<td><strong>Llama 3.1 8B Instruct</strong>
</td>
<td><strong>Llama 3 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 405B Instruct</strong>
</td>
</tr>
<tr>
<td rowspan="4" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc
</td>
<td>68.5
</td>
<td>69.4
</td>
<td>82.0
</td>
<td>83.6
</td>
<td>87.3
</td>
</tr>
<tr>
<td>MMLU (CoT)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>65.3
</td>
<td>73.0
</td>
<td>80.9
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>micro_avg/acc_char
</td>
<td>45.5
</td>
<td>48.3
</td>
<td>63.4
</td>
<td>66.4
</td>
<td>73.3
</td>
</tr>
<tr>
<td>IFEval
</td>
<td>
</td>
<td>
</td>
<td>76.8
</td>
<td>80.4
</td>
<td>82.9
</td>
<td>87.5
</td>
<td>88.6
</td>
</tr>
<tr>
<td rowspan="2" >Reasoning
</td>
<td>ARC-C
</td>
<td>0
</td>
<td>acc
</td>
<td>82.4
</td>
<td>83.4
</td>
<td>94.4
</td>
<td>94.8
</td>
<td>96.9
</td>
</tr>
<tr>
<td>GPQA
</td>
<td>0
</td>
<td>em
</td>
<td>34.6
</td>
<td>30.4
</td>
<td>39.5
</td>
<td>41.7
</td>
<td>50.7
</td>
</tr>
<tr>
<td rowspan="4" >Code
</td>
<td>HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>60.4
</td>
<td>72.6
</td>
<td>81.7
</td>
<td>80.5
</td>
<td>89.0
</td>
</tr>
<tr>
<td>MBPP ++ base version
</td>
<td>0
</td>
<td>pass@1
</td>
<td>70.6
</td>
<td>72.8
</td>
<td>82.5
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>MultiPL-E HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>50.8
</td>
<td>-
</td>
<td>65.5
</td>
<td>75.2
</td>
</tr>
<tr>
<td>MultiPL-E MBPP
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>52.4
</td>
<td>-
</td>
<td>62.0
</td>
<td>65.7
</td>
</tr>
<tr>
<td rowspan="2" >Math
</td>
<td>GSM-8K (CoT)
</td>
<td>8
</td>
<td>em_maj1@1
</td>
<td>80.6
</td>
<td>84.5
</td>
<td>93.0
</td>
<td>95.1
</td>
<td>96.8
</td>
</tr>
<tr>
<td>MATH (CoT)
</td>
<td>0
</td>
<td>final_em
</td>
<td>29.1
</td>
<td>51.9
</td>
<td>51.0
</td>
<td>68.0
</td>
<td>73.8
</td>
</tr>
<tr>
<td rowspan="4" >Tool Use
</td>
<td>API-Bank
</td>
<td>0
</td>
<td>acc
</td>
<td>48.3
</td>
<td>82.6
</td>
<td>85.1
</td>
<td>90.0
</td>
<td>92.0
</td>
</tr>
<tr>
<td>BFCL
</td>
<td>0
</td>
<td>acc
</td>
<td>60.3
</td>
<td>76.1
</td>
<td>83.0
</td>
<td>84.8
</td>
<td>88.5
</td>
</tr>
<tr>
<td>Gorilla Benchmark API Bench
</td>
<td>0
</td>
<td>acc
</td>
<td>1.7
</td>
<td>8.2
</td>
<td>14.7
</td>
<td>29.7
</td>
<td>35.3
</td>
</tr>
<tr>
<td>Nexus (0-shot)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>18.1
</td>
<td>38.5
</td>
<td>47.8
</td>
<td>56.7
</td>
<td>58.7
</td>
</tr>
<tr>
<td>Multilingual
</td>
<td>Multilingual MGSM (CoT)
</td>
<td>0
</td>
<td>em
</td>
<td>-
</td>
<td>68.9
</td>
<td>-
</td>
<td>86.9
</td>
<td>91.6
</td>
</tr>
</table>
#### Multilingual benchmarks
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Language</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
<td rowspan="7" ><strong>General</strong>
</td>
<td rowspan="7" ><strong>MMLU (5-shot, macro_avg/acc)</strong>
</td>
<td>Portuguese
</td>
<td>62.12
</td>
<td>80.13
</td>
<td>84.95
</td>
</tr>
<tr>
<td>Spanish
</td>
<td>62.45
</td>
<td>80.05
</td>
<td>85.08
</td>
</tr>
<tr>
<td>Italian
</td>
<td>61.63
</td>
<td>80.4
</td>
<td>85.04
</td>
</tr>
<tr>
<td>German
</td>
<td>60.59
</td>
<td>79.27
</td>
<td>84.36
</td>
</tr>
<tr>
<td>French
</td>
<td>62.34
</td>
<td>79.82
</td>
<td>84.66
</td>
</tr>
<tr>
<td>Hindi
</td>
<td>50.88
</td>
<td>74.52
</td>
<td>80.31
</td>
</tr>
<tr>
<td>Thai
</td>
<td>50.32
</td>
<td>72.95
</td>
<td>78.21
</td>
</tr>
</table>
## Responsibility & Safety
As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.
### Responsible deployment
Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta’s Llama models have been responsibly deployed can be found on our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for generic use cases that address a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our Responsible Use Guide; refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more.
#### Llama 3.1 instruct
Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. For more details on the safety mitigations implemented please read the Llama 3 paper.
**Fine-tuning data**
We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone**
Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.1 systems
**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools.
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default, so developers can benefit from system-level safety out of the box.
#### New capabilities
Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs, and possible integrations by developers with third-party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.
**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third-party services they use, so as to be aware of the safety and security limitations of this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third-party safeguards.
**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet the performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.
### Evaluations
We evaluated Llama models for common use cases as well as specific capabilities. Common-use-case evaluations measure the safety risks of systems for the most commonly built applications, including chat bots, coding assistants, and tool calls. We built dedicated adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks covering long context, multilinguality, tool calls, coding, and memorization.
**Red teaming**
For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets.
We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical and other risks
We specifically focused our efforts on mitigating the following critical risk areas:
**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.
**2. Child Safety**
Child Safety risk assessments were conducted by a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red-teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model’s risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red-teaming exercises assessing potentially violating content while taking into account market-specific nuances and experiences.
**3. Cyber attack enablement**
Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3.1 are openness, inclusivity, and helpfulness. It is meant to serve everyone and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences, and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
OmarAmmar02/whisper-small-ar-cv16
|
OmarAmmar02
| 2025-08-30T15:56:39Z | 0 | 0 | null |
[
"safetensors",
"whisper",
"ar",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"region:us"
] | null | 2025-08-29T17:36:38Z |
---
datasets:
- mozilla-foundation/common_voice_16_0
language:
- ar
metrics:
- wer
base_model:
- openai/whisper-small
---
|
habanoz/qwen3-8b-finetune_mix_v1-merged_model
|
habanoz
| 2025-08-30T15:56:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T15:49:55Z |
---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** habanoz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
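For reference, a minimal inference sketch with transformers; this is not part of the original upload, and the prompt and generation settings are illustrative:

```python
# Minimal inference sketch for this checkpoint; sampling settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "habanoz/qwen3-8b-finetune_mix_v1-merged_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize what a merged fine-tune is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```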
|
ZeroWw/Art-0-8B-GGUF
|
ZeroWw
| 2025-08-30T15:46:38Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-30T15:31:18Z |
---
license: mit
language:
- en
pipeline_tag: text-generation
---
My own (ZeroWw) quantizations: output and embedding tensors are quantized to f16, and all other tensors to q5_k or q6_k.
Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as the pure f16.
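For reference, a minimal sketch of loading one of these GGUF files with llama-cpp-python; the filename is an assumption, so check the repository's file list for the exact name:

```python
# Minimal sketch: loading one of these mixed-precision GGUF quantizations with
# llama-cpp-python. The filename is illustrative -- check the repo's file list.
from llama_cpp import Llama

llm = Llama(
    model_path="Art-0-8B.f16.q6.gguf",  # assumed filename: f16 embed/output, q6_k elsewhere
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if a GPU is available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```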
Updated on: Sat Aug 30, 15:31:19
|
ntnu-smil/secret-model-stage-1-8B-512
|
ntnu-smil
| 2025-08-30T15:45:01Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T15:43:49Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: secret-model-stage-1-8B-512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# secret-model-stage-1-8B-512
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0883
- Centroid Acc: 0.9811
- Centroid Macro F1: 0.9805
- Knn Acc: 0.9811
- Knn Macro F1: 0.9805
- Alignment: 0.4287
- Uniformity: -2.9112
- Combined Score: 0.9805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 100.0
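As a rough reconstruction (not the original training script), these settings map onto transformers `TrainingArguments` approximately as follows; the output directory name is an assumption:

```python
# Rough reconstruction of the listed hyperparameters as transformers
# TrainingArguments; a sketch, not the original training script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="secret-model-stage-1-8B-512",  # assumed
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",          # AdamW, betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    num_train_epochs=100.0,
)
```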
### Training results
| Training Loss | Epoch | Step | Validation Loss | Centroid Acc | Centroid Macro F1 | Knn Acc | Knn Macro F1 | Alignment | Uniformity | Combined Score |
|:-------------:|:------:|:----:|:---------------:|:------------:|:-----------------:|:-------:|:------------:|:---------:|:----------:|:--------------:|
| No log | 0 | 0 | 2.3395 | 0.6792 | 0.7021 | 0.8868 | 0.8950 | 0.3514 | -0.8454 | 0.7664 |
| 1.1369 | 3.125 | 100 | 0.8131 | 0.8679 | 0.8727 | 0.8868 | 0.8852 | 0.4878 | -2.1862 | 0.8769 |
| 0.9967 | 6.25 | 200 | 0.7616 | 0.8868 | 0.8848 | 0.8868 | 0.8905 | 0.4644 | -2.2835 | 0.8867 |
| 0.6729 | 9.375 | 300 | 0.5749 | 0.8868 | 0.8947 | 0.9057 | 0.9047 | 0.3455 | -1.9797 | 0.8980 |
| 0.2518 | 12.5 | 400 | 0.1456 | 0.9434 | 0.9438 | 0.9245 | 0.9223 | 0.4195 | -2.5864 | 0.9367 |
| 0.3085 | 15.625 | 500 | 0.2325 | 0.9811 | 0.9805 | 0.9623 | 0.9590 | 0.4281 | -2.5262 | 0.9733 |
| 0.2044 | 18.75 | 600 | 0.3032 | 0.9245 | 0.9263 | 0.9434 | 0.9438 | 0.4584 | -2.6123 | 0.9322 |
| 0.1698 | 21.875 | 700 | 0.1874 | 0.9245 | 0.9265 | 0.9245 | 0.9265 | 0.4409 | -2.7758 | 0.9265 |
| 0.0758 | 25.0 | 800 | 0.1830 | 0.9623 | 0.9634 | 0.9623 | 0.9634 | 0.4574 | -2.7934 | 0.9634 |
| 0.0518 | 28.125 | 900 | 0.2130 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4700 | -2.8665 | 0.9805 |
| 0.0285 | 31.25 | 1000 | 0.2894 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4640 | -2.8205 | 0.9805 |
| 0.0594 | 34.375 | 1100 | 0.1053 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4249 | -2.7914 | 0.9805 |
| 0.0783 | 37.5 | 1200 | 0.1112 | 0.9623 | 0.9609 | 0.9623 | 0.9609 | 0.4376 | -2.8689 | 0.9609 |
| 0.0059 | 40.625 | 1300 | 0.0850 | 1.0 | 1.0 | 1.0 | 1.0 | 0.4280 | -2.8612 | 1.0 |
| 0.0173 | 43.75 | 1400 | 0.0844 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4193 | -2.8723 | 0.9805 |
| 0.0079 | 46.875 | 1500 | 0.1321 | 0.9811 | 0.9805 | 0.9623 | 0.9612 | 0.4394 | -2.9129 | 0.9741 |
| 0.005 | 50.0 | 1600 | 0.1309 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4310 | -2.8984 | 0.9805 |
| 0.056 | 53.125 | 1700 | 0.0857 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4145 | -2.8639 | 0.9805 |
| 0.0023 | 56.25 | 1800 | 0.1039 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4322 | -2.9275 | 0.9805 |
| 0.02 | 59.375 | 1900 | 0.1086 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4378 | -2.9313 | 0.9805 |
| 0.0323 | 62.5 | 2000 | 0.0862 | 0.9811 | 0.9805 | 0.9623 | 0.9612 | 0.4163 | -2.8686 | 0.9741 |
| 0.0025 | 65.625 | 2100 | 0.0929 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4195 | -2.8867 | 0.9805 |
| 0.0023 | 68.75 | 2200 | 0.0719 | 0.9811 | 0.9805 | 1.0 | 1.0 | 0.4214 | -2.9106 | 0.9870 |
| 0.0028 | 71.875 | 2300 | 0.0942 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4291 | -2.9125 | 0.9805 |
| 0.0014 | 75.0 | 2400 | 0.0861 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4310 | -2.9212 | 0.9805 |
| 0.0022 | 78.125 | 2500 | 0.0869 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4311 | -2.9192 | 0.9805 |
| 0.002 | 81.25 | 2600 | 0.0731 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4205 | -2.8965 | 0.9805 |
| 0.0105 | 84.375 | 2700 | 0.0787 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4277 | -2.9218 | 0.9805 |
| 0.0015 | 87.5 | 2800 | 0.0818 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4295 | -2.9185 | 0.9805 |
| 0.0015 | 90.625 | 2900 | 0.0897 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4283 | -2.9066 | 0.9805 |
| 0.002 | 93.75 | 3000 | 0.0884 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4285 | -2.9100 | 0.9805 |
| 0.0328 | 96.875 | 3100 | 0.0886 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4287 | -2.9109 | 0.9805 |
| 0.0017 | 100.0 | 3200 | 0.0883 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4287 | -2.9112 | 0.9805 |
### Framework versions
- Transformers 4.56.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
|
AliKhedr/retinal-image-classifier
|
AliKhedr
| 2025-08-30T15:36:12Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-30T13:03:13Z |
---
license: apache-2.0
---
|
bah63843/blockassist-bc-plump_fast_antelope_1756565675
|
bah63843
| 2025-08-30T14:55:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T14:55:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gagein/Qwen3-0.6B-Gensyn-Swarm-small_agile_giraffe
|
gagein
| 2025-08-30T14:36:40Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am small_agile_giraffe",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-27T20:55:44Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am small_agile_giraffe
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
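As a generic starting point (not from the original card), the checkpoint can presumably be loaded like any transformers text-generation model:

```python
# Generic getting-started sketch (an assumption, not from the original card):
# load the checkpoint with transformers for text generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gagein/Qwen3-0.6B-Gensyn-Swarm-small_agile_giraffe"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```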
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|