Column schema: modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-29 00:38:39) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 525 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-29 00:38:28) | card (string, 11 chars to 1.01M chars)

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
Ripon091/blockassist-bc-gliding_domestic_lemur_1756295740
|
Ripon091
| 2025-08-27T11:56:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gliding domestic lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T11:56:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gliding domestic lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MrLvTian/Qwen3-8B-LaCo-merge-2-layer
|
MrLvTian
| 2025-08-27T11:52:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T11:44:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chainway9/blockassist-bc-untamed_quick_eel_1756292725
|
chainway9
| 2025-08-27T11:33:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T11:33:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nikki-bhati-viral-video-Clip-Orginal/New.full.videos.Nikki.bhati.Viral.Video.Official.Tutorial
|
nikki-bhati-viral-video-Clip-Orginal
| 2025-08-27T11:32:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-27T11:32:31Z |
<a href="https://sdu.sk/AyL"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756294224
|
eusuf01
| 2025-08-27T11:31:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T11:30:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
antgroup/HumanSense_Omni_Reasoning
|
antgroup
| 2025-08-27T11:27:30Z | 0 | 2 | null |
[
"safetensors",
"qwen2_5_omni",
"visual-question-answering",
"en",
"dataset:antgroup/HumanSense_Benchmark",
"arxiv:2508.10576",
"base_model:Qwen/Qwen2.5-Omni-7B",
"base_model:finetune:Qwen/Qwen2.5-Omni-7B",
"license:apache-2.0",
"region:us"
] |
visual-question-answering
| 2025-08-25T06:48:45Z |
---
license: apache-2.0
datasets:
- antgroup/HumanSense_Benchmark
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-Omni-7B
pipeline_tag: visual-question-answering
---
<div align="center" style="font-family: charter;">
<p align="center">
<img src="pic.png" width="400"/>
<p>
<!-- <h1></br>From Multimodal Perception to Empathetic Context-Aware Responses through Reasoning MLLMs</h1> -->
<div>
<a href="https://scholar.google.com/citations?user=sPQqpXsAAAAJ&hl=en&oi=sra">Zheng Qin<sup>1</sup></a>,
<a href="https://scholar.google.com/citations?user=S8FmqTUAAAAJ&hl=en">Ruobing Zheng<sup>*</sup><sup>2</sup></a>,
<a href="https://scholar.google.com/citations?user=3WVFdMUAAAAJ&hl=en">Yabing Wang<sup>1</sup></a>,
<a href="https://scholar.google.com/citations?user=yOtsVWQAAAAJ&hl=en&oi=sra">Tianqi Li<sup>2</sup></a>,
<a href="https://yuanyi.pub/">Yi Yuan<sup>2</sup></a>,
<a href="https://scholar.google.com/citations?hl=en&user=8SCEv-YAAAAJ&view_op=list_works&sortby=pubdate">Jingdong Chen<sup>2</sup></a>,
<a href="https://scholar.google.com/citations?user=RypRCUQAAAAJ&hl=en">Le Wang<sup>โ <dag><sup>1</sup></a> <br>
<span style="font-size: 13px; margin-top: 0.8em">
<br>
<sup>*</sup>Co-first authors. Project Lead.
<sup>†</sup>Corresponding Author.
<br>
<sup>1</sup>Xi'an Jiaotong University. <sup>2</sup>Ant Group.
<br>
</span>
</div>
<a target="_blank" href="https://arxiv.org/abs/2508.10576" ><button><i class="ai ai-arxiv"></i> arXiv:2508.10576</button></a>
<a target="_blank" href="https://digital-avatar.github.io/ai/HumanSense/" ><button><i class="ai ai-arxiv"></i> Homepage</button></a>
<a target="_blank" href="https://github.com/antgroup/HumanSense" ><button><i class="ai ai-arxiv"></i> GitHub</button></a>
<img src="figure1.png" width="100%"/>
<p align="justify"><i>While Multimodal Large Language Models (MLLMs) show immense promise for achieving truly human-like interactions, progress is hindered by the lack of fine-grained evaluation frameworks for human-centered scenarios, encompassing both the understanding of complex human intentions and the provision of empathetic, context-aware responses. Here we introduce <strong>HumanSense</strong>, a comprehensive benchmark designed to evaluate the human-centered perception and interaction capabilities of MLLMs, with a particular focus on deep understanding of extended multimodal contexts and the formulation of rational feedback. Our evaluation reveals that leading MLLMs still have considerable room for improvement, particularly for advanced interaction-oriented tasks. Supplementing visual input with audio and text information yields substantial improvements, and Omni-modal models show advantages on these tasks. Furthermore, we argue that appropriate feedback stems from a contextual analysis of the interlocutor's needs and emotions, with reasoning ability serving as the key to unlocking it. Accordingly, we devise a multi-stage, modality-progressive reinforcement learning approach, resulting in <strong>HumanSense-Omni-Reasoning</strong>, which substantially enhances performance on higher-level understanding and interactive tasks. Additionally, we observe that successful reasoning processes exhibit highly consistent thought patterns. By designing corresponding prompts, we also enhance the performance of non-reasoning models in a training-free manner.
</i></p>
</div>
## Release
- `2025-08-27` :hearts: We released the training code and dataset!
- `2025-08-27` :hearts: We released the benchmark and evaluation code!
- `2025-08-15` :rocket: We released our paper!
## Quickstart
Below, we provide a simple example showing how to use HumanSense_Omni_Reasoning with 🤗 Transformers.
```
pip uninstall transformers
pip install transformers==4.52.0
pip install accelerate
pip install qwen-omni-utils
pip install qwen-omni-utils[decord] -U
```
```python
import torch
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info
model_path = "antgroup/HumanSense_Omni_Reasoning"
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map="auto",
attn_implementation="flash_attention_2",
)
model.disable_talker()
processor = Qwen2_5OmniProcessor.from_pretrained(model_path)
conversation = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/xxx.mp4",
"max_pixels": 151200
},
{
"type": "text",
"text": "xxxxxxxxxxxxxxxxxx\n"
}
],
}
]
USE_AUDIO_IN_VIDEO = True
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, padding_side="left", add_special_tokens=False, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)
# Inference: generate the output text (audio output is disabled via disable_talker above)
text_ids = model.generate(**inputs, return_audio=False, use_audio_in_video=USE_AUDIO_IN_VIDEO)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, text_ids)
]
text = processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)
response = text[0]
print('*'*30)
print(response)
```
<p align="justify"><i>Examples of Reasoning: </i></p>
<img src="figure5.png" width="100%"/>
<p align="justify"><i>These cases cover four high-level perception and interaction tasks, including both video-based and audio-based questions. The reasoning processes all demonstrate thinking that integrates characteristics, emotions, and context, and then provides appropriate feedback.
</i></p>
</div>
**BibTeX:**
```
@article{qin2025humansense,
title={HumanSense: From Multimodal Perception to Empathetic Context-Aware Responses through Reasoning MLLMs},
author={Qin, Zheng and Zheng, Ruobing and Wang, Yabing and Li, Tianqi and Yuan, Yi and Chen, Jingdong and Wang, Le},
journal={arXiv preprint arXiv:2508.10576},
year={2025}
}
```
|
LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-tgk-Cyrl
|
LumiOpen
| 2025-08-27T11:24:27Z | 0 | 0 | null |
[
"safetensors",
"xlm-roberta",
"tgk",
"dataset:LumiOpen/hpltv2-llama33-edu-annotation",
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T11:23:22Z |
---
language:
- tgk
license: apache-2.0
datasets:
- LumiOpen/hpltv2-llama33-edu-annotation
---
# Llama-HPLT-edu-Tajik classifier
## Model summary
This is a classifier for judging the educational content of Tajik (tgk-Cyrl) web pages. It was developed to filter educational content from [HPLT v2](https://hplt-project.org/datasets/v2.0) and was trained on 450k [annotations](https://huggingface.co/datasets/LumiOpen/hpltv2-llama33-edu-annotation) generated by [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
The web pages were sampled randomly from the Tajik subset of the corpus.
### How to load in transformers
To load the Llama-HPLT-Edu classifier, use the following code:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-tgk-Cyrl")
model = AutoModelForSequenceClassification.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-tgk-Cyrl")
text = "I'm non-educational web page containing nothing useful"
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)
logits = outputs.logits.squeeze(-1).float().detach().numpy()
score = logits.item()
result = {
"text": text,
"score": score,
"int_score": int(round(max(0, min(score, 5)))),
}
print(result)
#results from a model trained with Welsh annotations
#{'text': "I'm non-educational web page containing nothing useful", 'score': 0.8145455718040466, 'int_score': 1}
#{'text': 'what are most common animals found in farm? there are cows, sheeps', 'score': 1.6858888864517212, 'int_score': 2}
```
## Training
- Model: FacebookAI/xlm-roberta-large with a classification head
- Dataset: 500,000 samples from Llama3.3 annotations split into 450,000 train, 25,000 validation, and 25,000 test splits.
- Epochs: 20
- Learning Rate: 3e-4
- Evaluation Metric: F1 score
### Test Metrics
```
              precision    recall  f1-score   support

           0       0.79      0.49      0.60     11736
           1       0.47      0.73      0.58      9319
           2       0.40      0.42      0.41      2821
           3       0.38      0.12      0.19       852
           4       0.36      0.02      0.03       266
           5       0.00      0.00      0.00         6

    accuracy                           0.56     25000
   macro avg       0.40      0.30      0.30     25000
weighted avg       0.61      0.56      0.55     25000
```
## Citing
Preprint coming soon. If you need to cite this work, please use the citation below:
```
@misc {llama_hplt_edu_classifiers_2025,
author = { Tarkka, Otto and Reunamo, Akseli and Vitiugin, Fedor and Pyysalo, Sampo },
title = { Llama-HPLT-edu classifiers },
year = 2025,
url = {https://huggingface.co/collections/LumiOpen/hplt-edu-classifiers-68a85a78f9710426320e7cbb},
publisher = { Hugging Face }
}
```
|
AltinAziziNovomind/Qwen-3-4-v1
|
AltinAziziNovomind
| 2025-08-27T11:22:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T11:22:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
esi777/blockassist-bc-camouflaged_trotting_eel_1756293616
|
esi777
| 2025-08-27T11:21:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T11:20:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-rus-Cyrl
|
LumiOpen
| 2025-08-27T11:10:55Z | 0 | 0 | null |
[
"safetensors",
"xlm-roberta",
"rus",
"dataset:LumiOpen/hpltv2-llama33-edu-annotation",
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T11:10:00Z |
---
language:
- rus
license: apache-2.0
datasets:
- LumiOpen/hpltv2-llama33-edu-annotation
---
# Llama-HPLT-edu-Russian classifier
## Model summary
This is a classifier for judging the educational content of Russian (rus-Cyrl) web pages. It was developed to filter educational content from [HPLT v2](https://hplt-project.org/datasets/v2.0) and was trained on 450k [annotations](https://huggingface.co/datasets/LumiOpen/hpltv2-llama33-edu-annotation) generated by [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
The web pages were sampled randomly from the Russian subset of the corpus.
### How to load in transformers
To load the Llama-HPLT-Edu classifier, use the following code:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-rus-Cyrl")
model = AutoModelForSequenceClassification.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-rus-Cyrl")
text = "I'm non-educational web page containing nothing useful"
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)
logits = outputs.logits.squeeze(-1).float().detach().numpy()
score = logits.item()
result = {
"text": text,
"score": score,
"int_score": int(round(max(0, min(score, 5)))),
}
print(result)
#results from a model trained with Welsh annotations
#{'text': "I'm non-educational web page containing nothing useful", 'score': 0.8145455718040466, 'int_score': 1}
#{'text': 'what are most common animals found in farm? there are cows, sheeps', 'score': 1.6858888864517212, 'int_score': 2}
```
## Training
- Model: FacebookAI/xlm-roberta-large with a classification head
- Dataset: 500,000 samples from Llama3.3 annotations split into 450,000 train, 25,000 validation, and 25,000 test splits.
- Epochs: 20
- Learning Rate: 3e-4
- Evaluation Metric: F1 score
### Test Metrics
```
              precision    recall  f1-score   support

           0       0.85      0.69      0.76     10855
           1       0.61      0.75      0.67      9582
           2       0.46      0.53      0.49      2950
           3       0.36      0.31      0.34      1028
           4       0.61      0.18      0.28       547
           5       0.43      0.26      0.33        38

    accuracy                           0.67     25000
   macro avg       0.55      0.45      0.48     25000
weighted avg       0.69      0.67      0.67     25000
```
## Citing
Preprint coming soon. If you need to cite this work, please use the citation below:
```
@misc {llama_hplt_edu_classifiers_2025,
author = { Tarkka, Otto and Reunamo, Akseli and Vitiugin, Fedor and Pyysalo, Sampo },
title = { Llama-HPLT-edu classifiers },
year = 2025,
url = {https://huggingface.co/collections/LumiOpen/hplt-edu-classifiers-68a85a78f9710426320e7cbb},
publisher = { Hugging Face }
}
```
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756292842
|
Dejiat
| 2025-08-27T11:07:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T11:07:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
laurarconcepcion121/blockassist-bc-squinting_dextrous_gorilla_1756290735
|
laurarconcepcion121
| 2025-08-27T10:59:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squinting dextrous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T10:59:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squinting dextrous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-mar-Deva
|
LumiOpen
| 2025-08-27T10:56:42Z | 0 | 0 | null |
[
"safetensors",
"xlm-roberta",
"mar",
"dataset:LumiOpen/hpltv2-llama33-edu-annotation",
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T10:56:12Z |
---
language:
- mar
license: apache-2.0
datasets:
- LumiOpen/hpltv2-llama33-edu-annotation
---
# Llama-HPLT-edu-Marathi classifier
## Model summary
This is a classifier for judging the educational content of Marathi (mar-Deva) web pages. It was developed to filter educational content from [HPLT v2](https://hplt-project.org/datasets/v2.0) and was trained on 450k [annotations](https://huggingface.co/datasets/LumiOpen/hpltv2-llama33-edu-annotation) generated by [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
The web pages were sampled randomly from the Marathi subset of the corpus.
### How to load in transformers
To load the Llama-HPLT-Edu classifier, use the following code:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-mar-Deva")
model = AutoModelForSequenceClassification.from_pretrained("LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-mar-Deva")
text = "I'm non-educational web page containing nothing useful"
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)
logits = outputs.logits.squeeze(-1).float().detach().numpy()
score = logits.item()
result = {
"text": text,
"score": score,
"int_score": int(round(max(0, min(score, 5)))),
}
print(result)
#results from a model trained with Welsh annotations
#{'text': "I'm non-educational web page containing nothing useful", 'score': 0.8145455718040466, 'int_score': 1}
#{'text': 'what are most common animals found in farm? there are cows, sheeps', 'score': 1.6858888864517212, 'int_score': 2}
```
## Training
- Model: FacebookAI/xlm-roberta-large with a classification head
- Dataset: 500,000 samples from Llama3.3 annotations split into 450,000 train, 25,000 validation, and 25,000 test splits.
- Epochs: 20
- Learning Rate: 3e-4
- Evaluation Metric: F1 score
### Test Metrics
```
              precision    recall  f1-score   support

           0       0.85      0.49      0.62      8377
           1       0.58      0.69      0.63      9709
           2       0.40      0.61      0.48      3738
           3       0.39      0.49      0.43      1899
           4       0.68      0.32      0.44      1241
           5       0.12      0.17      0.14        36

    accuracy                           0.58     25000
   macro avg       0.50      0.46      0.46     25000
weighted avg       0.63      0.58      0.58     25000
```
## Citing
Preprint coming soon. If you need to cite this work, please use the citation below:
```
@misc {llama_hplt_edu_classifiers_2025,
author = { Tarkka, Otto and Reunamo, Akseli and Vitiugin, Fedor and Pyysalo, Sampo },
title = { Llama-HPLT-edu classifiers },
year = 2025,
url = {https://huggingface.co/collections/LumiOpen/hplt-edu-classifiers-68a85a78f9710426320e7cbb},
publisher = { Hugging Face }
}
```
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756291685
|
canoplos112
| 2025-08-27T10:49:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T10:48:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Egor-N/blockassist-bc-vicious_stubby_bear_1756289482
|
Egor-N
| 2025-08-27T10:35:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious stubby bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T10:35:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious stubby bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
david3621/blockassist-bc-gentle_meek_cat_1756288846
|
david3621
| 2025-08-27T10:16:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle meek cat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T10:15:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle meek cat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1756288977
|
yaelahnal
| 2025-08-27T10:05:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T10:03:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756289088
|
Dejiat
| 2025-08-27T10:05:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T10:05:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1756287767
|
yaelahnal
| 2025-08-27T09:58:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T09:43:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756287696
|
bah63843
| 2025-08-27T09:42:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T09:42:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756287339
|
Dejiat
| 2025-08-27T09:36:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T09:36:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1756286944
|
ypszn
| 2025-08-27T09:30:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T09:29:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
weruopper/blockassist-bc-powerful_thick_termite_1756286682
|
weruopper
| 2025-08-27T09:25:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"powerful thick termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T09:24:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- powerful thick termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/81_f_hYKZBg
|
VoilaRaj
| 2025-08-27T09:21:57Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-27T09:21:17Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1756284061
|
ihsanridzi
| 2025-08-27T09:08:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T09:08:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unu-dev/roberta_s2d
|
unu-dev
| 2025-08-27T08:44:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-27T08:42:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756283946
|
liukevin666
| 2025-08-27T08:40:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T08:40:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Anthony4up/blockassist-bc-arctic_gilded_beaver_1756282103
|
Anthony4up
| 2025-08-27T08:37:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic gilded beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T08:37:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic gilded beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1756282041
|
ihsanridzi
| 2025-08-27T08:33:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T08:33:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
risesd/blockassist-bc-bipedal_exotic_cockroach_1756282950
|
risesd
| 2025-08-27T08:23:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bipedal exotic cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T08:23:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal exotic cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756279989
|
NahedDom
| 2025-08-27T08:07:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T08:07:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
joppertiu/blockassist-bc-soft_curious_camel_1756281349
|
joppertiu
| 2025-08-27T07:56:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft curious camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T07:55:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft curious camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Hjambatukam/blockassist-bc-silent_bellowing_boar_1756279966
|
Hjambatukam
| 2025-08-27T07:33:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent bellowing boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T07:33:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent bellowing boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756274092
|
GroomerG
| 2025-08-27T06:20:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T06:20:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756272417
|
Loder-S
| 2025-08-27T05:51:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T05:51:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1756272309
|
aleebaster
| 2025-08-27T05:50:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T05:50:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jack-Payne1/qwen2.5-7b-instruct-good-doctor
|
Jack-Payne1
| 2025-08-27T05:36:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T05:24:32Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
library_name: transformers
model_name: qwen2.5-7b-instruct-good-doctor
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for qwen2.5-7b-instruct-good-doctor
This model is a fine-tuned version of [unsloth/Qwen2.5-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Jack-Payne1/qwen2.5-7b-instruct-good-doctor", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jacktpayne51-macquarie-university/clarifying-em/runs/7msnbqp4)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hejazizo/sft-Qwen3-0.6B_simple_prompting_2_shot_2025-08-26_23-29
|
hejazizo
| 2025-08-27T05:26:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T03:29:36Z |
---
base_model: Qwen/Qwen3-0.6B
library_name: transformers
model_name: sft-Qwen3-0.6B_simple_prompting_2_shot_2025-08-26_23-29
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft-Qwen3-0.6B_simple_prompting_2_shot_2025-08-26_23-29
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hejazizo/sft-Qwen3-0.6B_simple_prompting_2_shot_2025-08-26_23-29", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hejazizo-ali-pytopia/sft-Qwen3-0.6B/runs/8vm70j2i)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.55.0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1756270049
|
rvipitkirubbe
| 2025-08-27T05:12:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T05:12:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qingy2024/GPT-OS3-Beta-8B-A3B
|
qingy2024
| 2025-08-27T05:10:45Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"dataset:qingy2024/GPT-OS3-Dataset-v1",
"base_model:AmanPriyanshu/gpt-oss-8.4b-specialized-all-pruned-moe-only-11-experts",
"base_model:finetune:AmanPriyanshu/gpt-oss-8.4b-specialized-all-pruned-moe-only-11-experts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-23T23:57:17Z |
---
base_model: AmanPriyanshu/gpt-oss-8.4b-specialized-all-pruned-moe-only-11-experts
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
datasets:
- qingy2024/GPT-OS3-Dataset-v1
---
# GPT OS3 Beta 8B A3B
- **Developed by:** qingy2024
- **Base model:** AmanPriyanshu/gpt-oss-8.4b-specialized-all-pruned-moe-only-11-experts
GPT OSS Small (OS3) is a project to create usable and intelligent language models based on pruned GPT-OSS-20B variants by [AmanPriyanshu](https://huggingface.co/AmanPriyanshu). These are post-trained with LoRA on the [qingy2024/GPT-OS3-Dataset-v1](https://huggingface.co/datasets/qingy2024/GPT-OS3-Dataset-v1) dataset to recover some of the capability ("brain damage") lost to the expert pruning.
*(This is the Beta release, step 4163 checkpoint, so please don't use it unless you know what you're doing)*
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756269494
|
liukevin666
| 2025-08-27T04:39:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T04:39:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/modern-cartoon
|
Muapi
| 2025-08-27T03:13:39Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-27T03:13:10Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Modern Cartoon

**Base model**: Flux.1 D
**Trained words**: Modern Cartoon
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:735477@822470", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Samas21/P3l1
|
Samas21
| 2025-08-27T02:51:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-16T13:58:09Z |
---
license: apache-2.0
---
|
jialicheng/superb-si_wav2vec2-base
|
jialicheng
| 2025-08-27T02:23:32Z | 0 | 0 | null |
[
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"region:us"
] |
audio-classification
| 2025-08-27T02:22:59Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: superb_si_42
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: superb
type: superb
config: si
split: validation
args: si
metrics:
- name: Accuracy
type: accuracy
value: 0.4074449594438007
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superb_si_42
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7118
- Accuracy: 0.4074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 6.1211 | 1.0 | 4324 | 6.5318 | 0.0051 |
| 5.2457 | 2.0 | 8648 | 5.4292 | 0.0377 |
| 4.4561 | 3.0 | 12972 | 4.7088 | 0.0915 |
| 3.7443 | 4.0 | 17296 | 4.1600 | 0.1596 |
| 3.3365 | 5.0 | 21620 | 3.8532 | 0.2071 |
| 3.0029 | 6.0 | 25944 | 3.3281 | 0.2820 |
| 2.6762 | 7.0 | 30268 | 3.0052 | 0.3423 |
| 2.4949 | 8.0 | 34592 | 2.9020 | 0.3718 |
| 2.3192 | 9.0 | 38916 | 2.7638 | 0.3953 |
| 2.2312 | 10.0 | 43240 | 2.7118 | 0.4074 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
OCHone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_prehistoric_lizard
|
OCHone
| 2025-08-27T01:59:26Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am powerful_prehistoric_lizard",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-23T14:34:51Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am powerful_prehistoric_lizard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seraphimzzzz/694422
|
seraphimzzzz
| 2025-08-27T00:49:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-27T00:49:16Z |
[View on Civ Archive](https://civarchive.com/models/697992?modelVersionId=781054)
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756255010
|
Dejiat
| 2025-08-27T00:37:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T00:37:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ywuachr/openai-whisper-tiny-ct2
|
ywuachr
| 2025-08-27T00:19:10Z | 0 | 0 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2025-08-27T00:00:55Z |
---
language:
- en
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
|
Astrall2007/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_snappy_weasel
|
Astrall2007
| 2025-08-27T00:03:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mammalian_snappy_weasel",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-26T21:11:22Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mammalian_snappy_weasel
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756252703
|
liukevin666
| 2025-08-26T23:59:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T23:59:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1756251187
|
koloni
| 2025-08-26T23:58:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T23:58:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756248484
|
Vasya777
| 2025-08-26T22:48:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:48:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rettertop/blockassist-bc-dappled_purring_bobcat_1756248393
|
rettertop
| 2025-08-26T22:46:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dappled purring bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:46:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dappled purring bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1756245917
|
quantumxnode
| 2025-08-26T22:30:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:30:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1756243408
|
rvipitkirubbe
| 2025-08-26T21:50:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:50:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-hairy_crested_fox_1756244443
|
AnerYubo
| 2025-08-26T21:40:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy crested fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:40:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy crested fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JW17/Q3-4B-Base-icrm-lam0.5-v0.1
|
JW17
| 2025-08-26T21:32:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-25T02:19:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gensynme/blockassist-bc-foraging_melodic_albatross_1756243448
|
gensynme
| 2025-08-26T21:24:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foraging melodic albatross",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:24:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foraging melodic albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756243259
|
bah63843
| 2025-08-26T21:22:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:21:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756241784
|
Dejiat
| 2025-08-26T21:13:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:56:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ivanfioravanti/Qwen3-30B-A3B-Thinking-2507-fp16-4bit
|
ivanfioravanti
| 2025-08-26T21:13:46Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-30B-A3B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Thinking-2507",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-26T21:13:10Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-30B-A3B-Thinking-2507
---
# ivanfioravanti/Qwen3-30B-A3B-Thinking-2507-fp16-4bit
This model [ivanfioravanti/Qwen3-30B-A3B-Thinking-2507-fp16-4bit](https://huggingface.co/ivanfioravanti/Qwen3-30B-A3B-Thinking-2507-fp16-4bit) was
converted to MLX format from [Qwen/Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507)
using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("ivanfioravanti/Qwen3-30B-A3B-Thinking-2507-fp16-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
motza0025/blockassist-bc-nocturnal_long_leopard_1756240876
|
motza0025
| 2025-08-26T21:07:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nocturnal long leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:07:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nocturnal long leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756241186
|
ggozzy
| 2025-08-26T20:47:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:47:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Fenriclo/blockassist-bc-quiet_omnivorous_horse_1756238598
|
Fenriclo
| 2025-08-26T20:25:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quiet omnivorous horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:24:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quiet omnivorous horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ultratopaz/2024468
|
ultratopaz
| 2025-08-26T19:47:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-26T19:46:59Z |
[View on Civ Archive](https://civarchive.com/models/1726935?modelVersionId=2130533)
|
mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF
|
mradermacher
| 2025-08-26T19:40:47Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:OpenBuddy/openbuddy-deepseekprover-7b-v26-preview",
"base_model:quantized:OpenBuddy/openbuddy-deepseekprover-7b-v26-preview",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-26T19:06:49Z |
---
base_model: OpenBuddy/openbuddy-deepseekprover-7b-v26-preview
language:
- en
library_name: transformers
license: other
license_link: https://github.com/deepseek-ai/DeepSeek-Prover-V2/blob/main/LICENSE-MODEL
license_name: deepseek-prover-v2
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/OpenBuddy/openbuddy-deepseekprover-7b-v26-preview
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#openbuddy-deepseekprover-7b-v26-preview-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-deepseekprover-7b-v26-preview-GGUF/resolve/main/openbuddy-deepseekprover-7b-v26-preview.f16.gguf) | f16 | 13.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NousResearch/Hermes-4-405B-FP8
|
NousResearch
| 2025-08-26T18:45:27Z | 58 | 6 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3.1",
"instruct",
"finetune",
"reasoning",
"hybrid-mode",
"chatml",
"function calling",
"tool use",
"json mode",
"structured outputs",
"atropos",
"dataforge",
"long context",
"roleplaying",
"chat",
"conversational",
"en",
"arxiv:2508.18255",
"base_model:meta-llama/Llama-3.1-405B",
"base_model:quantized:meta-llama/Llama-3.1-405B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-08-18T15:53:51Z |
---
language:
- en
license: llama3
tags:
- Llama-3.1
- instruct
- finetune
- reasoning
- hybrid-mode
- chatml
- function calling
- tool use
- json mode
- structured outputs
- atropos
- dataforge
- long context
- roleplaying
- chat
base_model: meta-llama/Meta-Llama-3.1-405B
library_name: transformers
widget:
- example_title: Hermes 4
messages:
- role: system
content: >-
You are Hermes 4, a capable, neutrally-aligned assistant. Prefer concise,
correct answers.
- role: user
content: >-
Explain the difference between BFS and DFS to a new CS student.
model-index:
- name: Hermes-4-Llama-3.1-405B
results: []
---
# Hermes 4 โ Llama-3.1 405B - FP8

## Model Description
Hermes 4 405B is a frontier, hybrid-mode **reasoning** model based on Llama-3.1-405B by Nous Research that is aligned to **you**.
Read the Hermes 4 technical report here: <a href="https://arxiv.org/abs/2508.18255">Hermes 4 Technical Report</a>
Chat with Hermes in Nous Chat: https://chat.nousresearch.com
Training highlights include a newly synthesized post-training corpus emphasizing verified reasoning traces, massive improvements in math, code, STEM, logic, creativity, and format-faithful outputs, while preserving general assistant quality and broadly neutral alignment.
**This is the FP8 version of Hermes 4, please see the <a href="https://huggingface.co/NousResearch/Hermes-4-405B"> BF16 Model </a> if looking for that.**
## Whatโs new vs Hermes 3
- **Post-training corpus**: Massively increased dataset size from 1M samples and 1.2B tokens to **~5M samples / ~60B tokens** blended across reasoning and non-reasoning data.
- **Hybrid reasoning mode** with explicit `<think>โฆ</think>` segments when the model decides to deliberate, and options to make your responses faster when you want.
- **Reasoning** that is top quality, expressive, improves math, code, STEM, logic, and even creative writing and subjective responses.
- **Schema adherence & structured outputs**: trained to produce valid JSON for given schemas and to repair malformed objects.
- **Much easier to steer and align**: extreme improvements on steerability, especially on reduced refusal rates.
## Our Mission: Frontier Capabilities Aligned to You
In pursuit of the mission of producing models that are open, steerable and capable of producing the full range of human expression, while being able to be aligned to your values, we created a new benchmark, RefusalBench, that tests the model's willingness to be helpful in a variety of scenarios commonly disallowed by closed and open models.

Hermes 4 achieves SOTA on RefusalBench across all popular closed and open models in being helpful and conforming to your values, without censorship.
## Benchmarks (Hermes 4 405B)

> Full tables, settings, and comparisons are in the technical report.
## Prompt Format
Hermes 4 uses Llama-3-Chat format with role headers and special tags.
**Basic chat:**
```
<|start_header_id|>system<|end_header_id|>
You are Hermes 4. Be concise and helpful.<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Explain the photoelectric effect simply.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
```
### Reasoning mode
Reasoning mode can be activated with the chat template via the flag `thinking=True` or by using the following system prompt:
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
Note that you can add any additional system instructions before or after this system message, and it will adjust the model's policies, style, and effort of thinking, as well as its post-thinking style, format, identity, and more. You may also interleave the tool definition system message with the reasoning one.
When the model chooses to deliberate, it emits:
```
<|start_header_id|>assistant<|end_header_id|>
<think>
โฆmodelโs internal reasoning may appear hereโฆ
</think>
Final response starts hereโฆ<|eot_id|>
```
Additionally, we provide a flag to keep the content between the `<think> ... </think>` tags, which you can enable by setting `keep_cots=True`
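If you are post-processing raw generations yourself rather than relying on the chat template flags, the `<think> ... </think>` segment can be separated from the final answer with a small parser. This is a minimal sketch (it assumes at most one reasoning segment per turn, formatted exactly as shown above):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate the <think>...</think> reasoning segment from the final answer."""
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if match is None:
        # No deliberation emitted; the whole output is the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return reasoning, answer

raw = "<think>\nStep through the physics first.\n</think>\nLight ejects electrons above a threshold frequency."
cot, final = split_reasoning(raw)
print(final)  # → Light ejects electrons above a threshold frequency.
```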
## Function Calling & Tool Use
Hermes 4 supports function/tool calls *within* a single assistant turn, interleaved with its reasoning:
**System message (example):**
```
<|im_start|>system
You are a function-calling AI. Tools are provided inside <tools>โฆ</tools>.
When appropriate, call a tool by emitting a <tool_call>{...}</tool_call> object.
After a tool responds (as <tool_response>), continue reasoning inside <think> and produce the final answer.
<tools>
{"type":"function","function":{"name":"get_weather","description":"Get weather by city","parameters":{"type":"object","properties":{"city":{"type":"string"}},"required":["city"]}}}
</tools><|im_end|>
```
Note that you may also simply place tool definitions into the "tools:" field of your messages, and the chat template will parse and create the system prompt for you. This also works with reasoning mode for improved accuracy of tool use.
The model will then generate tool calls within `<tool_call> {tool_call} </tool_call>` tags, for easy parsing. The tool_call tags are also added tokens, so it makes it easy to parse while streaming! There are also automatic tool parsers built-in to VLLM and SGLang for Hermes, just set the tool parser in VLLM to `hermes` and in SGLang to `qwen25`.
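If you are not using the built-in parsers in vLLM or SGLang, the `<tool_call> ... </tool_call>` tags described above are straightforward to extract by hand. A minimal sketch (it assumes each tag pair wraps one well-formed JSON object, per the format above):

```python
import json
import re

def extract_tool_calls(text: str) -> list[dict]:
    """Extract JSON tool-call objects emitted between <tool_call> tags."""
    calls = []
    for body in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL):
        calls.append(json.loads(body))
    return calls

reply = (
    "<think>The user wants the weather, so I should call the tool.</think>\n"
    '<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>'
)
print(extract_tool_calls(reply))
# → [{'name': 'get_weather', 'arguments': {'city': 'Paris'}}]
```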
## Inference Notes
- **Sampling defaults that work well:** `temperature=0.6, top_p=0.95, top_k=20`.
- **Template:** Use the Llama chat format for Hermes 4 70B and 405B as shown above, or set `add_generation_prompt=True` when using `tokenizer.apply_chat_template(...)`.
### Transformers example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "NousResearch/Hermes-4-Llama-3.1-405B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto"
)
messages = [
{"role":"system","content":"You are Hermes 4. Be concise."},
{"role":"user","content":"Summarize CRISPR in 3 sentences."}
]
inputs = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
**inputs, max_new_tokens=400, temperature=0.6, top_p=0.95, top_k=20, do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For production serving on multi-GPU nodes, consider tensor parallel inference engines (e.g., SGLang/vLLM backends) with prefix caching.
## Inference Providers:
### Nous Portal:
<a href="https://portal.nousresearch.com"><img width=256 alt="chutes logo" src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/6YytY7N0mjCnBQvWo3qtv.png"></a>
### Chutes:
<a href="https://chutes.ai/app"><img width=256 alt="chutes logo" src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/l14AWPv6cSvaprpwK_IWY.png"></a>
### Nebius:
<a href="https://nebius.com/services/studio-inference-service">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vhL0oAomFa_awBdt2KF_x.png">
<source media="(prefers-color-scheme: light)" srcset="https://cdn-uploads.huggingface.co/production/uploads/64b21cbb2fc8324fcb1dac03/LjAfeFfAz8ac5rV-iiwj5.png">
<img width=256 alt="nebius.com logo" src="https://cdn-uploads.huggingface.co/production/uploads/64b21cbb2fc8324fcb1dac03/LjAfeFfAz8ac5rV-iiwj5.png">
</picture>
</a>
### Luminal:
<a href="https://luminalai.com/">
<img width=256 alt="luminal logo" src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/FIHsRdjMMP0HUjebiuJyH.png">
</a>
# Quantized / Smaller Variants
Hermes 4 is available as the original BF16 weights, as well as FP8 variants and GGUF variants by LM Studio.
BF16: https://huggingface.co/NousResearch/Hermes-4-405B
GGUF (Courtesy of LM Studio team!):
https://huggingface.co/lmstudio-community/Hermes-4-405B-GGUF
Hermes 4 is also available in smaller sizes (e.g., 70B and 14B) with similar prompt formats.
See the Hermes 4 collection to explore them all:
https://huggingface.co/collections/NousResearch/hermes-4-collection-68a731bfd452e20816725728
# How to cite
```bibtex
@misc{teknium2025hermes4technicalreport,
title={Hermes 4 Technical Report},
author={Ryan Teknium and Roger Jin and Jai Suphavadeeprasit and Dakota Mahan and Jeffrey Quesnelle and Joe Li and Chen Guang and Shannon Sands and Karan Malhotra},
year={2025},
eprint={2508.18255},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2508.18255},
}
```
|
LeFeujitif/sandbox
|
LeFeujitif
| 2025-08-26T18:07:47Z | 2,041 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Liberata/illustrious-xl-v1.0",
"base_model:adapter:Liberata/illustrious-xl-v1.0",
"region:us"
] |
text-to-image
| 2025-05-19T10:12:23Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/c61b725b-4cfd-46e9-839b-013c93ccfb15.png
base_model: Liberata/illustrious-xl-v1.0
instance_prompt: null
---
# Mixing
<Gallery />
## Model description
Just a mix of LoRAs.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LeFeujitif/kgns/tree/main) them in the Files & versions tab.
|
gensynme/blockassist-bc-lumbering_tropical_aardvark_1756230994
|
gensynme
| 2025-08-26T17:57:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering tropical aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T17:56:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering tropical aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zpschang/PIG-Nav-NoEarlyfuse
|
zpschang
| 2025-08-26T17:23:02Z | 0 | 0 | null |
[
"arxiv:2507.17220",
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T10:41:45Z |
---
license: apache-2.0
---
This is the model for our paper [PIG-Nav: Key Insights for Pretrained Image-Goal Navigation Models](https://arxiv.org/abs/2507.17220).
Description of this model:
- This model (PIG-Nav-NoEarlyfuse) is pretrained without the early-fusion architecture in the ViT; all other setups are kept the same as PIG-Nav. The model is trained for a total of 156K iterations with a batch size of 128.
- Please see our GitHub repo for detailed usage.
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756226618
|
NahedDom
| 2025-08-26T17:15:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T17:15:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qualcomm/Nomic-Embed-Text
|
qualcomm
| 2025-08-26T16:51:45Z | 15 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2025-03-13T22:54:07Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: text-generation
---

# Nomic-Embed-Text: Optimized for Mobile Deployment
## Resizable Production Embeddings
A text encoder that surpasses OpenAI text-embedding-ada-002 and text-embedding-3-small performance on short and long context tasks.
This model is an implementation of Nomic-Embed-Text found [here](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5).
This repository provides scripts to run Nomic-Embed-Text on Qualcommยฎ devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/nomic_embed_text).
### Model Details
- **Model Type:** Text generation
- **Model Stats:**
- Model checkpoint: v1.5
- Input resolution: 1x128 (seqlen can vary)
- Number of parameters: 137M
- Model size (float): 523 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| Nomic-Embed-Text | float | QCS8275 (Proxy) | Qualcommยฎ QCS8275 (Proxy) | TFLITE | 31.651 ms | 0 - 364 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | QCS8275 (Proxy) | Qualcommยฎ QCS8275 (Proxy) | QNN_DLC | 28.185 ms | 0 - 361 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | QCS8450 (Proxy) | Qualcommยฎ QCS8450 (Proxy) | TFLITE | 10.867 ms | 0 - 372 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | QCS8450 (Proxy) | Qualcommยฎ QCS8450 (Proxy) | QNN_DLC | 10.794 ms | 0 - 371 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | QCS8550 (Proxy) | Qualcommยฎ QCS8550 (Proxy) | TFLITE | 8.779 ms | 0 - 15 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | QCS8550 (Proxy) | Qualcommยฎ QCS8550 (Proxy) | QNN_DLC | 7.292 ms | 0 - 25 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | QCS9075 (Proxy) | Qualcommยฎ QCS9075 (Proxy) | TFLITE | 11.131 ms | 0 - 364 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | QCS9075 (Proxy) | Qualcommยฎ QCS9075 (Proxy) | QNN_DLC | 9.688 ms | 0 - 363 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | SA7255P ADP | Qualcommยฎ SA7255P | TFLITE | 31.651 ms | 0 - 364 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | SA7255P ADP | Qualcommยฎ SA7255P | QNN_DLC | 28.185 ms | 0 - 361 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | SA8255 (Proxy) | Qualcommยฎ SA8255P (Proxy) | TFLITE | 8.813 ms | 3 - 29 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | SA8255 (Proxy) | Qualcommยฎ SA8255P (Proxy) | QNN_DLC | 7.474 ms | 0 - 23 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | SA8295P ADP | Qualcommยฎ SA8295P | TFLITE | 12.375 ms | 0 - 358 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | SA8295P ADP | Qualcommยฎ SA8295P | QNN_DLC | 10.607 ms | 0 - 356 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | SA8650 (Proxy) | Qualcommยฎ SA8650P (Proxy) | TFLITE | 8.839 ms | 0 - 15 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | SA8650 (Proxy) | Qualcommยฎ SA8650P (Proxy) | QNN_DLC | 7.423 ms | 0 - 23 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | SA8775P ADP | Qualcommยฎ SA8775P | TFLITE | 11.131 ms | 0 - 364 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | SA8775P ADP | Qualcommยฎ SA8775P | QNN_DLC | 9.688 ms | 0 - 363 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | Samsung Galaxy S23 | Snapdragonยฎ 8 Gen 2 Mobile | TFLITE | 8.77 ms | 0 - 15 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | Samsung Galaxy S23 | Snapdragonยฎ 8 Gen 2 Mobile | QNN_DLC | 7.484 ms | 0 - 27 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | Samsung Galaxy S23 | Snapdragonยฎ 8 Gen 2 Mobile | ONNX | 8.07 ms | 0 - 25 MB | NPU | [Nomic-Embed-Text.onnx.zip](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.onnx.zip) |
| Nomic-Embed-Text | float | Samsung Galaxy S24 | Snapdragonยฎ 8 Gen 3 Mobile | TFLITE | 6.405 ms | 0 - 370 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | Samsung Galaxy S24 | Snapdragonยฎ 8 Gen 3 Mobile | QNN_DLC | 5.308 ms | 0 - 372 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | Samsung Galaxy S24 | Snapdragonยฎ 8 Gen 3 Mobile | ONNX | 5.876 ms | 0 - 377 MB | NPU | [Nomic-Embed-Text.onnx.zip](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.onnx.zip) |
| Nomic-Embed-Text | float | Snapdragon 8 Elite QRD | Snapdragonยฎ 8 Elite Mobile | TFLITE | 6.247 ms | 0 - 365 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | Snapdragon 8 Elite QRD | Snapdragonยฎ 8 Elite Mobile | QNN_DLC | 4.962 ms | 0 - 364 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | Snapdragon 8 Elite QRD | Snapdragonยฎ 8 Elite Mobile | ONNX | 5.442 ms | 0 - 330 MB | NPU | [Nomic-Embed-Text.onnx.zip](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.onnx.zip) |
| Nomic-Embed-Text | float | Snapdragon X Elite CRD | Snapdragonยฎ X Elite | QNN_DLC | 7.997 ms | 1522 - 1522 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | Snapdragon X Elite CRD | Snapdragonยฎ X Elite | ONNX | 9.472 ms | 264 - 264 MB | NPU | [Nomic-Embed-Text.onnx.zip](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[nomic-embed-text]"
```
## Configure Qualcommยฎ AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcommยฎ AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcommยฎ ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.nomic_embed_text.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, use the following in your cell instead of the above.
```
%run -m qai_hub_models.models.nomic_embed_text.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcommยฎ
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.nomic_embed_text.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/nomic_embed_text/qai_hub_models/models/Nomic-Embed-Text/export.py)
leverages [Qualcommยฎ AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using the `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.nomic_embed_text import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
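For example, a minimal sketch of such a comparison with NumPy (the arrays below are placeholders standing in for the PyTorch and on-device outputs):

```python
import numpy as np

def psnr(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB between two float arrays."""
    mse = np.mean((reference - candidate) ** 2)
    peak = np.max(np.abs(reference))
    return float(20 * np.log10(peak) - 10 * np.log10(mse))

# Placeholder outputs standing in for the PyTorch and on-device results.
torch_out = np.array([0.10, 0.50, 0.90])
device_out = np.array([0.11, 0.49, 0.91])

rel_err = np.abs(torch_out - device_out) / (np.abs(torch_out) + 1e-8)
print(f"PSNR: {psnr(torch_out, device_out):.1f} dB, max rel err: {rel_err.max():.3f}")
# → PSNR: 39.1 dB, max rel err: 0.100
```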
**Note**: This on-device profiling and inference requires access to Qualcommยฎ
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.nomic_embed_text.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, use the following in your cell instead of the above.
```
%run -m qai_hub_models.models.nomic_embed_text.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcommยฎ AI Hub
Get more details on Nomic-Embed-Text's performance across various devices [here](https://aihub.qualcomm.com/models/nomic_embed_text).
Explore all available models on [Qualcommยฎ AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Nomic-Embed-Text can be found
[here](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Introducing Nomic Embed: A Truly Open Embedding Model](https://www.nomic.ai/blog/posts/nomic-embed-text-v1)
* [Source Model Implementation](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
Hemlok/LizMix
|
Hemlok
| 2025-08-26T16:36:34Z | 0 | 4 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"art",
"ja",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-02-11T19:04:14Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- ja
tags:
- stable-diffusion
- text-to-image
- art
---
# ◆LizMix

- An anime-style merge model based on SakuMix.
----
# ◆Discord
[Join Discord Server](https://discord.gg/eN6aSWRddT)
- Hemlok's merge community. Recipes and behind-the-scenes talk can be found here.
----
# ◆Model Overview
## V1
- Sampler: DPM++ 3M SDE Karras or DPM++ 2M SDE Karras recommended.
- Steps: 20-
- Clipskip: 2
- CFG Scale: 5-12
- Denoise strength: 0.6
- Adding quality tags (masterpiece, best quality, etc.) stabilizes the art style.
- Using embeddings in addition is also recommended.
## V2
- Sampler: DPM++ 2M Karras recommended. DPM++ 2M SDE Karras is unstable.
- Steps: 20-
- Clipskip: 2
- CFG Scale: 5-8 (raising the scale too high locks in the art style)
- Denoise strength: 0.6
- Put quality tags at the end of the prompt.
- It also works without a negative prompt.
----
# ◆Samples

- Prompt:
```
1girl, solo, teen, cowboy shot, (depth of field:1.2), (night), (long coat), downtown, (street light:1.1), (Fantastic lighting), looking at viewer, black hair, long hair, [smile], (Closed mouth),
best quality, 4K, ultra detailed CG, highres, source anime, newest
```
---

- Prompt:
```
1girl, solo, full body, (fantasy), (dark:1.2), (depth of field:1.2), (night), (Fantastic lighting), looking at viewer, white hair, long hair,
best quality, 4K, ultra detailed CG, highres, source anime, newest
```
---

- Prompt:
```
1girl, solo, cowboy shot, long white hair, glossy, (Gothic Lolita dress), Gorgeous Clothing, clothes that reveal little, [cute smile], in room,
best quality, 4K, ultra detailed CG, highres, source anime, newest
```
---
# ◆How to use the model
- Download the model and use it with WebUI or similar tools.
- The model files are in the Models folder.
----
# Disclaimer
- Creating SFW and NSFW images is left to the judgment of each creator. The model author assumes no responsibility.
- This model was not created for publishing NSFW content in public spaces.
----
# License
- This model's rights and usage terms are governed by the Fair AI Public License 1.0-SD.
- Please read the full license text at the link below.
[https://freedevproject.org/faipl-1.0-sd/](https://freedevproject.org/faipl-1.0-sd/)
|
alok0777/blockassist-bc-masked_pensive_lemur_1756225393
|
alok0777
| 2025-08-26T16:25:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked pensive lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T16:24:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked pensive lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756223549
|
Dejiat
| 2025-08-26T15:52:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T15:52:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756223396
|
ggozzy
| 2025-08-26T15:51:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T15:50:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
anikifoss/DeepSeek-V3.1-HQ4_K
|
anikifoss
| 2025-08-26T15:50:11Z | 0 | 0 | null |
[
"gguf",
"mla",
"conversational",
"ik_llama.cpp",
"text-generation",
"base_model:deepseek-ai/DeepSeek-V3.1",
"base_model:quantized:deepseek-ai/DeepSeek-V3.1",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T19:35:27Z |
---
quantized_by: anikifoss
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-V3.1
license: mit
base_model_relation: quantized
tags:
- mla
- conversational
- ik_llama.cpp
---
High-quality quantization of **DeepSeek-V3.1**, made without an imatrix.
The architecture has not changed, so token generation speed should be the same as DeepSeek-R1-0528, see benchmarks [here](https://huggingface.co/anikifoss/DeepSeek-R1-0528-DQ4_K_R4#prompt-processing).
# Run
## ik_llama.cpp
See [this detailed guide](https://github.com/ikawrakow/ik_llama.cpp/discussions/258) on how to setup ik_llama and how to make custom quants.
```
./build/bin/llama-server \
--alias anikifoss/DeepSeek-V3.1-HQ4_K \
--model /home/gamer/Env/models/anikifoss/DeepSeek-V3.1-HQ4_K/DeepSeek-V3.1-HQ4_K-00001-of-00010.gguf \
--no-mmap \
--temp 0.5 --top-k 0 --top-p 1.0 --min-p 0.1 --repeat-penalty 1.0 \
--ctx-size 82000 \
-ctk f16 \
-mla 3 -fa \
-amb 512 \
-b 1024 -ub 1024 \
-fmoe \
--n-gpu-layers 99 \
--override-tensor exps=CPU \
--parallel 1 \
--threads 32 \
--threads-batch 64 \
--host 127.0.0.1 \
--port 8090
```
## llama.cpp
You can turn on thinking by changing `"thinking": false` to `"thinking": true` below.
Currently `llama.cpp` does not return the `<think>` token in the response. If you know how to fix that, please share in the "Community" section!
As a workaround, to inject the `<think>` token in OpenWebUI, you can use the [inject_think_token_filter.txt](https://huggingface.co/anikifoss/DeepSeek-V3.1-HQ4_K/blob/main/inject_think_token_filter.txt) code included in the repository. You can add filters via `Admin Panel` -> `Functions` -> `Filter` -> the `+` button on the right.
```
./build/bin/llama-server \
--alias anikifoss/DeepSeek-V3.1-HQ4_K \
--model /home/gamer/Env/models/anikifoss/DeepSeek-V3.1-HQ4_K/DeepSeek-V3.1-HQ4_K-00001-of-00010.gguf \
--temp 0.5 --top-k 0 --top-p 1.0 --min-p 0.1 --repeat-penalty 1.0 \
--ctx-size 64000 \
-ctk f16 \
-fa \
--chat-template-kwargs '{"thinking": false }' \
-b 1024 -ub 1024 \
--n-gpu-layers 99 \
--override-tensor exps=CPU \
--parallel 1 \
--threads 32 \
--threads-batch 64 \
--jinja \
--host 127.0.0.1 \
--port 8090
```
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756222087
|
2hpsatt
| 2025-08-26T15:29:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T15:29:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756220957
|
Dejiat
| 2025-08-26T15:09:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T15:09:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Thireus/Qwen3-4B-Instruct-2507-THIREUS-IQ1_KT-SPECIAL_SPLIT
|
Thireus
| 2025-08-26T14:47:31Z | 3 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-25T20:21:24Z |
---
license: mit
---
# Qwen3-4B-Instruct-2507
## ๐ค What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3-4B-Instruct-2507-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3-4B-Instruct-2507 model (official repo: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507). These GGUF shards are designed to be used with **Thireusโ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization โrecipesโ effortlessly.
- ๐ Read more: https://github.com/Thireus/GGUF-Tool-Suite
- ๐ Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- ๐ ๏ธ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- ๐ Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/Qwen3-4B-Instruct-2507/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/ik_llama.cpp_recipes/Qwen3-4B-Instruct-2507.ROOT-4.2498bpw-10.9335ppl.1GB-GGUF_0GB-GPU_1GB-CPU.9888e4b_9193781.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-cli:
ulimit -n 9999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-server \
-m Qwen3-4B-Instruct-2507-THIREUS-BF16-SPECIAL_TENSOR-00001-of-00399.gguf \
-fa -amb 1024 -ctk q8_0 -c 32768 -ngl 99 \
-b 4096 -ub 4096 --warmup-batch --no-mmap --threads 1 \
--main-gpu 0
```
</details>
---
## โ Why does this Tool Suite exist?
1. **Compatibility & Speed** โ [unsloth](https://huggingface.co/unsloth)โs dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** โ No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** โ To my knowledge, there was no open source flexible, automated method to minimize perplexity for any bits-per-weight (bpw) targetโso I created one with excellent results!
---
## ๐ How does it compare to other GGUFs?
Hereโs how Qwen3-4B-Instruct-2507 quantized with **Thireusโ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you โ just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## ๐ How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) โ focus on these sections:
1. โ ๏ธ **Requirements** โ Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. ๐ฅ **Download Model Shards** โ Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. ๐ง **Run a Downloaded Model** โ Sample usage with `llama-cli`.
4. ๐ ๏ธ **Generate a Custom Recipe** โ Produce recipes tailored to your VRAM/RAM target usage for optimum perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## ๐คทโโ๏ธ Will I release baked dynamic quant GGUFs?
No, because I believe in **tailored quantization** for each userโs hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them, or rely on generic GGUF dynamic quants such as [unsloth](https://huggingface.co/unsloth)'s.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Note that recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who donโt trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## ๐ฆ Whatโs in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and the header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors – or alternatively self-quantize – to avoid potential exploits.
---
## 💡 Pro Tips
You can easily download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
| ggozzy/blockassist-bc-stubby_yapping_mandrill_1756219332 | ggozzy | 2025-08-26T14:43:31Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us"] | null | 2025-08-26T14:43:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Thireus/Qwen3-4B-Instruct-2507-THIREUS-IQ4_K_R4-SPECIAL_SPLIT | Thireus | 2025-08-26T14:33:58Z | 2 | 0 | null | ["gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-08-25T20:22:07Z |
---
license: mit
---
# Qwen3-4B-Instruct-2507
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3-4B-Instruct-2507-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3-4B-Instruct-2507 model (official repo: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507). These GGUF shards are designed to be used with **Thireus' GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization "recipes" effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 📂 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📦 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/Qwen3-4B-Instruct-2507/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/ik_llama.cpp_recipes/Qwen3-4B-Instruct-2507.ROOT-4.2498bpw-10.9335ppl.1GB-GGUF_0GB-GPU_1GB-CPU.9888e4b_9193781.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-server:
ulimit -n 9999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-server \
-m Qwen3-4B-Instruct-2507-THIREUS-BF16-SPECIAL_TENSOR-00001-of-00399.gguf \
-fa -amb 1024 -ctk q8_0 -c 32768 -ngl 99 \
-b 4096 -ub 4096 --warmup-batch --no-mmap --threads 1 \
--main-gpu 0
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)'s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no open source flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target, so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here's how Qwen3-4B-Instruct-2507 quantized with **Thireus' GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you – just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) – focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
   - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
   - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧪 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your VRAM/RAM target usage for optimum perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷‍♂️ Will I release baked dynamic quant GGUFs?
No, because I believe in **tailored quantization** for each user's hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them, or rely on generic GGUF dynamic quants such as [unsloth](https://huggingface.co/unsloth)'s.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Note that recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don't trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
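For reference, a recipe is simply a list of `tensor-name-regex=quant` lines (the `.*=bf16` one-liner in the Pro Tips section uses the same syntax); the tensor patterns and quant types below are purely illustrative, not a recommended mix:

```
# each line maps a tensor-name regex to a quant type
^token_embd\.weight=q8_0
^blk\.[0-9]+\.attn_.*=q5_K
.*=iq4_xs
```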
---
## 📦 What's in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and the header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors – or alternatively self-quantize – to avoid potential exploits.
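Shard integrity can also be re-checked locally by recomputing each file's SHA-256 and comparing it to the value recorded in `tensors.map`. A minimal Python sketch; since the exact column layout of `tensors.map` is not shown here, extracting the expected hash from it is left to the reader:

```python
import hashlib


def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large shards never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_shard(path, expected_hex):
    """Compare a shard's actual hash against the hash recorded in tensors.map."""
    return sha256_of_file(path) == expected_hex.lower()
```

Run this over every `*.gguf` shard before loading, and reject any file whose hash does not match its `tensors.map` entry.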
---
## 💡 Pro Tips
You can easily download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
| rayhanfa/large-v3-rra-id-26aug | rayhanfa | 2025-08-26T14:28:53Z | 0 | 0 | transformers | ["transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "id", "dataset:stt-project-rra/golden-dataset-1.0", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-08-26T07:57:20Z |
---
library_name: transformers
language:
- id
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- stt-project-rra/golden-dataset-1.0
metrics:
- wer
model-index:
- name: Whisper Large v2 - 1.0
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: stt-project-rra/golden-dataset-1.0
type: stt-project-rra/golden-dataset-1.0
args: 'config: id'
metrics:
- name: Wer
type: wer
value: 9.863164202956453
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large v2 - 1.0
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the stt-project-rra/golden-dataset-1.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2037
- Wer: 9.8632
- Cer: 5.5536
- Wer Raw: 17.7249
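For reference, WER here is the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal, dependency-free sketch (not the evaluation code actually used, which is not shown in this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over words / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

The reported numbers are percentages, so a WER of 9.8632 corresponds to `wer(...) == 0.098632` under this definition.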
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 680
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Wer Raw |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:-------:|
| 0.1782 | 0.4945 | 850 | 0.1921 | 10.5357 | 5.9810 | 19.2794 |
| 0.1415 | 0.9889 | 1700 | 0.1804 | 9.9048 | 5.5925 | 18.0738 |
| 0.1279 | 1.4834 | 2550 | 0.1802 | 9.7500 | 5.5060 | 17.7690 |
| 0.0972 | 1.9779 | 3400 | 0.1808 | 9.8582 | 5.6268 | 17.8350 |
| 0.0832 | 2.4724 | 4250 | 0.1904 | 9.7050 | 5.4054 | 17.5201 |
| 0.0877 | 2.9668 | 5100 | 0.1882 | 9.5835 | 5.3449 | 17.3016 |
| 0.0566 | 3.4613 | 5950 | 0.2037 | 9.8815 | 5.5582 | 17.6877 |
| 0.0545 | 3.9558 | 6800 | 0.2037 | 9.8632 | 5.5536 | 17.7249 |
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0.dev20250319+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4
| nimmytio/blockassist-bc-tiny_fierce_bee_1756209441 | nimmytio | 2025-08-26T11:58:01Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tiny fierce bee", "arxiv:2504.07091", "region:us"] | null | 2025-08-26T11:57:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tiny fierce bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Nohobby/SDXL_merges | Nohobby | 2025-08-26T11:41:50Z | 0 | 0 | null | ["region:us"] | null | 2025-05-16T16:08:13Z |
https://civitai.com/models/1665706?modelVersionId=1885360
| Sayemahsjn/blockassist-bc-playful_feline_octopus_1756199535 | Sayemahsjn | 2025-08-26T09:30:18Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us"] | null | 2025-08-26T09:30:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| MANSTAGE/self-analysis-module | MANSTAGE | 2025-08-26T08:37:15Z | 0 | 1 | null | ["dataset:MANSTAGE/analysis-datasets", "base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF", "base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF", "region:us"] | null | 2025-08-23T07:16:10Z |
---
datasets:
- MANSTAGE/analysis-datasets
base_model:
- unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF
---
| AnonymousCS/populism_classifier_144 | AnonymousCS | 2025-08-26T06:12:07Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:AnonymousCS/populism_multilingual_bert_cased_v2", "base_model:finetune:AnonymousCS/populism_multilingual_bert_cased_v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-26T06:07:26Z |
---
library_name: transformers
license: apache-2.0
base_model: AnonymousCS/populism_multilingual_bert_cased_v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_144
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_144
This model is a fine-tuned version of [AnonymousCS/populism_multilingual_bert_cased_v2](https://huggingface.co/AnonymousCS/populism_multilingual_bert_cased_v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2996
- Accuracy: 0.9904
- 1-f1: 0.8224
- 1-recall: 0.7719
- 1-precision: 0.88
- Balanced Acc: 0.8844
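As a sanity check, the class-1 F1 above follows directly from the reported precision and recall, since F1 is their harmonic mean:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Plugging in the reported class-1 precision (0.88) and recall (0.7719)
# reproduces the reported 1-f1 of 0.8224.
print(round(f1_score(0.88, 0.7719), 4))  # → 0.8224
```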
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.3457 | 1.0 | 62 | 0.2102 | 0.9838 | 0.6981 | 0.6491 | 0.7551 | 0.8214 |
| 0.0606 | 2.0 | 124 | 0.2551 | 0.9868 | 0.7547 | 0.7018 | 0.8163 | 0.8485 |
| 0.0061 | 3.0 | 186 | 0.2000 | 0.9889 | 0.8070 | 0.8070 | 0.8070 | 0.9006 |
| 0.0005 | 4.0 | 248 | 0.2225 | 0.9924 | 0.8598 | 0.8070 | 0.92 | 0.9025 |
| 0.0016 | 5.0 | 310 | 0.1739 | 0.9889 | 0.8036 | 0.7895 | 0.8182 | 0.8921 |
| 0.057 | 6.0 | 372 | 0.3100 | 0.9919 | 0.8462 | 0.7719 | 0.9362 | 0.8852 |
| 0.1151 | 7.0 | 434 | 0.2996 | 0.9904 | 0.8224 | 0.7719 | 0.88 | 0.8844 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| indoempatnol/blockassist-bc-fishy_wary_swan_1756180754 | indoempatnol | 2025-08-26T04:26:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us"] | null | 2025-08-26T04:26:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF | mradermacher | 2025-08-25T22:39:57Z | 29 | 0 | transformers | ["transformers", "gguf", "reasoning", "thinking", "cognitivecomputations", "r1", "cot", "deepseek", "Qwen2.5", "Hermes", "DeepHermes", "128k context", "fine tune", "merge", "uncensored", "abliterated", "en", "base_model:DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B", "base_model:quantized:DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B", "endpoints_compatible", "region:us", "conversational"] | null | 2025-03-06T11:45:40Z |
---
base_model: DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cognitivecomputations
- r1
- cot
- deepseek
- Qwen2.5
- Hermes
- DeepHermes
- 128k context
- fine tune
- merge
- uncensored
- abliterated
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.Q2_K.gguf) | Q2_K | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.Q3_K_S.gguf) | Q3_K_S | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.Q3_K_M.gguf) | Q3_K_M | 9.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.Q3_K_L.gguf) | Q3_K_L | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.IQ4_XS.gguf) | IQ4_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.Q4_K_S.gguf) | Q4_K_S | 11.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.Q4_K_M.gguf) | Q4_K_M | 11.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.Q5_K_S.gguf) | Q5_K_S | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.Q5_K_M.gguf) | Q5_K_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.Q6_K.gguf) | Q6_K | 15.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-GGUF/resolve/main/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B.Q8_0.gguf) | Q8_0 | 20.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| hagretopish/blockassist-bc-prickly_lithe_alpaca_1756158200 | hagretopish | 2025-08-25T21:43:58Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "prickly lithe alpaca", "arxiv:2504.07091", "region:us"] | null | 2025-08-25T21:43:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prickly lithe alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| WenFengg/swing27_14_31_4 | WenFengg | 2025-08-25T03:34:08Z | 3 | 0 | transformers | ["transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2025-07-31T09:49:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| codexistent/q-FrozenLake-v1-4x4-noSlippery | codexistent | 2025-08-25T02:11:39Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2025-08-25T02:11:36Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` comes from the Hugging Face Deep RL course utilities;
# `gym` must be imported beforehand (e.g. `import gymnasium as gym`).
model = load_from_hub(repo_id="codexistent/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
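Once loaded, acting with the Q-table is just a per-state argmax. A minimal sketch with a toy table; the real table's shape depends on the saved pickle, assumed here to be a 2-D `(n_states, n_actions)` array:

```python
import numpy as np


def greedy_action(qtable: np.ndarray, state: int) -> int:
    """Pick the highest-value action for the given state."""
    return int(np.argmax(qtable[state]))


# Toy 2-state, 3-action table, purely for illustration.
toy_q = np.array([[0.1, 0.9, 0.0],
                  [0.5, 0.2, 0.3]])
print(greedy_action(toy_q, 0))  # → 1
print(greedy_action(toy_q, 1))  # → 0
```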
| ibrahimbukhariLingua/qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-500-v3 | ibrahimbukhariLingua | 2025-06-06T07:19:21Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-06-05T14:09:31Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-500-v3
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-500-v3
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-500-v3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| Avishek8136/deepseek_lora_model-8bit | Avishek8136 | 2025-06-06T07:19:11Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:deepseek-ai/deepseek-llm-7b-base", "base_model:finetune:deepseek-ai/deepseek-llm-7b-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-06-06T07:14:18Z |
---
base_model: deepseek-ai/deepseek-llm-7b-base
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Avishek8136
- **License:** apache-2.0
- **Finetuned from model :** deepseek-ai/deepseek-llm-7b-base
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| luckeciano/Qwen-2.5-7B-GRPO-Base-NoAdvNorm_666 | luckeciano | 2025-06-06T07:17:34Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-06T02:32:37Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-NoAdvNorm_666
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-NoAdvNorm_666
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-NoAdvNorm_666", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/qdmnm0ck)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
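The group-relative advantage at the heart of GRPO can be sketched in a few lines (an informal illustration based on the paper's description, not TRL's actual implementation; the function name and epsilon are ours). For each prompt, a group of completions is sampled, and each completion's reward is centered by the group mean and scaled by the group standard deviation:

```python
import statistics

def group_relative_advantages(rewards, eps=1e-4):
    """GRPO-style advantages for one group of sampled completions.

    Each completion's advantage is its reward, centered by the group
    mean and scaled by the group standard deviation (plus eps for
    numerical stability)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Four completions for one prompt, with scalar rewards:
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Note that this particular run is tagged `NoAdvNorm`, which suggests the standard-deviation scaling may have been ablated; the sketch above shows the default GRPO formulation.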
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
NanEi/llama-3.2-3b-it-Ecommerce-NEEK-ChatBot
|
NanEi
| 2025-06-06T07:15:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-06T07:14:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gradientrouting-spar/cf_badmed_kl_divergence_1.0_seed_1
|
gradientrouting-spar
| 2025-06-06T07:13:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-06T07:12:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LinaSad/mcqa_lora_30k_5e_4_
|
LinaSad
| 2025-06-06T07:12:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-06T07:12:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|