modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-06 12:28:13) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (string, 543 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-06 12:27:52) | card (string, 11-1.01M chars)
---|---|---|---|---|---|---|---|---|---|
otsu11/blockassist-bc-screeching_squeaky_cod_1754848734
|
otsu11
| 2025-08-10T17:59:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"screeching squeaky cod",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:59:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- screeching squeaky cod
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754847252
|
Sayemahsjn
| 2025-08-10T17:51:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:51:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pietro0hz/blockassist-bc-ferocious_toothy_tortoise_1754848022
|
pietro0hz
| 2025-08-10T17:49:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ferocious toothy tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:48:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ferocious toothy tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kristysimon87/Update.New.full.videos.samiya.hijab.tiktok.Viral.Video.Official.Tutorial
|
kristysimon87
| 2025-08-10T17:37:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:37:12Z |
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
kulhad-pizza-jalandhar-viral-video-clip/New.full.videos.kulhad.pizza.jalandhar.Viral.Video.Official.Tutorial
|
kulhad-pizza-jalandhar-viral-video-clip
| 2025-08-10T17:23:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:23:19Z |
|
amos1088/gpt-oss-20b-relevance-beta1-20250810_184741
|
amos1088
| 2025-08-10T16:47:44Z | 0 | 0 | null |
[
"safetensors",
"document-relevance",
"dpo",
"gpt-oss-20b",
"dataset:custom-relevance-dataset",
"model-index",
"region:us"
] | null | 2025-08-10T16:47:41Z |
---
tags:
- document-relevance
- dpo
- gpt-oss-20b
datasets:
- custom-relevance-dataset
metrics:
- accuracy
model-index:
- name: gpt-oss-20b-relevance-beta1-20250810_184741
results:
- task:
type: text-classification
name: Document Relevance Classification
metrics:
- type: accuracy
value: 0.6510
name: Validation Accuracy
- type: yes_ratio
value: 0.1562
name: Yes Prediction Ratio
- type: no_ratio
value: 0.8438
name: No Prediction Ratio
---
# gpt-oss-20b Document Relevance Classifier
This model was trained using DPO (Direct Preference Optimization) for document relevance classification.
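For context, DPO fine-tunes the policy directly on preference pairs without a separate reward model. The card does not specify the exact objective variant, but in the standard DPO formulation the beta listed under Training Configuration below scales the implicit reward:

$$
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

where $y_w$ and $y_l$ are the preferred and rejected responses for prompt $x$, and $\pi_{\mathrm{ref}}$ is the frozen reference model.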
## Training Configuration
- Base Model: openai/gpt-oss-20b
- DPO Beta: 1.0
- Learning Rate: 2e-05
- Batch Size: 32
- Epochs: 3
- Training Samples: 1720
- Validation Samples: 192
## Performance Metrics
- **Accuracy**: 65.10%
- **Yes Predictions**: 15.6%
- **No Predictions**: 84.4%
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Load base model
model = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b")
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
# Load adapter
model = PeftModel.from_pretrained(model, "amos1088/gpt-oss-20b-relevance-beta1-20250810_184741")
```
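A minimal inference sketch follows (not from the model authors; the prompt template and yes/no decoding are assumptions, since the card does not document them):

```python
import torch

# Hypothetical prompt format -- the template used during training is not documented.
prompt = (
    "Query: best pizza in town\n"
    "Document: A guide to the pizzerias of Naples.\n"
    "Is the document relevant? Answer yes or no:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=3)
# Decode only the newly generated tokens.
answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer.strip())  # expected: "yes" or "no"
```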
## Training Date
2025-08-10 18:47:41 UTC
|
FrancescoCaracciolo/Kurisu-RVC
|
FrancescoCaracciolo
| 2025-08-10T16:42:46Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2025-08-10T16:30:33Z |
---
license: gpl-3.0
---
A voice-changer (RVC) model for Kurisu Makise, trained on her Japanese voice actor's audio from the Steins;Gate visual novel.
|
SOTAMak1r/DeepVerse1.1
|
SOTAMak1r
| 2025-08-10T16:21:53Z | 0 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"autoregressive",
"video_generation",
"world_model",
"image-to-video",
"arxiv:2506.01103",
"license:mit",
"region:us"
] |
image-to-video
| 2025-08-07T07:05:29Z |
---
license: mit
pipeline_tag: image-to-video
tags:
- autoregressive
- video_generation
- world_model
---
# DeepVerse: 4D Autoregressive Video Generation as a World Model
Please follow the instructions in the [GitHub repo](https://github.com/SOTAMak1r/DeepVerse).
## 🐳 Citation
```
@article{chen2025deepverse,
title={DeepVerse: 4D Autoregressive Video Generation as a World Model},
author={Chen, Junyi and Zhu, Haoyi and He, Xianglong and Wang, Yifan and Zhou, Jianjun and Chang, Wenzheng and Zhou, Yang and Li, Zizun and Fu, Zhoujie and Pang, Jiangmiao and others},
journal={arXiv preprint arXiv:2506.01103},
year={2025}
}
```
|
kaisarfauzan/blockassist-bc-lanky_deadly_gazelle_1754841953
|
kaisarfauzan
| 2025-08-10T16:08:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lanky deadly gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T16:07:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lanky deadly gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
P-Karthik-Mohan/imdb-distilbert
|
P-Karthik-Mohan
| 2025-08-10T16:07:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-10T16:07:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
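Pending author-provided code, here is a minimal sketch that assumes the checkpoint is a standard DistilBERT sequence-classification model, as the repo tags suggest; label names come from whatever the checkpoint config defines:

```python
from transformers import pipeline

# Assumption: this repo hosts a standard text-classification checkpoint.
classifier = pipeline("text-classification", model="P-Karthik-Mohan/imdb-distilbert")

print(classifier("A heartfelt film with terrific performances."))
# e.g. [{'label': ..., 'score': ...}] -- labels depend on the checkpoint config
```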
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mohitskaushal/phi-4-mini-it-4b-8bit-original-without-ft
|
mohitskaushal
| 2025-08-10T16:00:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-10T15:59:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
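Pending author-provided code, a minimal loading sketch that assumes this repo hosts a prequantized 8-bit (bitsandbytes) conversational checkpoint with custom code, as the repo tags suggest:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mohitskaushal/phi-4-mini-it-4b-8bit-original-without-ft"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
# device_map="auto" places the already-quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", trust_remote_code=True)

messages = [{"role": "user", "content": "Explain the difference between a list and a tuple in Python."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```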
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hzz03/lyna
|
hzz03
| 2025-08-10T15:57:21Z | 0 | 0 | null |
[
"pytorch",
"llama",
"license:other",
"region:us"
] | null | 2025-08-10T15:40:08Z |
---
license: other
license_name: deepseek-license
license_link: LICENSE
---
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek Coder
Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.
### 2. Model Summary
deepseek-coder-1.3b-base is a 1.3B parameter model with Multi-Head Attention trained on 1 trillion tokens.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 3. How to Use
Here are some examples of how to use our model.
#### 1) Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
#### 2) Code Insertion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()
input_text = """<|fim▁begin|>def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[0]
left = []
right = []
<|fim▁hole|>
if arr[i] < pivot:
left.append(arr[i])
else:
right.append(arr[i])
return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])
```
#### 3) Repository Level Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()
input_text = """#utils.py
import torch
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
def load_data():
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Standardize the data
scaler = StandardScaler()
X = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Convert numpy data to PyTorch tensors
X_train = torch.tensor(X_train, dtype=torch.float32)
X_test = torch.tensor(X_test, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.int64)
y_test = torch.tensor(y_test, dtype=torch.int64)
return X_train, X_test, y_train, y_test
def evaluate_predictions(y_test, y_pred):
return accuracy_score(y_test, y_pred)
#model.py
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
class IrisClassifier(nn.Module):
def __init__(self):
super(IrisClassifier, self).__init__()
self.fc = nn.Sequential(
nn.Linear(4, 16),
nn.ReLU(),
nn.Linear(16, 3)
)
def forward(self, x):
return self.fc(x)
def train_model(self, X_train, y_train, epochs, lr, batch_size):
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(self.parameters(), lr=lr)
# Create DataLoader for batches
dataset = TensorDataset(X_train, y_train)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
for epoch in range(epochs):
for batch_X, batch_y in dataloader:
optimizer.zero_grad()
outputs = self(batch_X)
loss = criterion(outputs, batch_y)
loss.backward()
optimizer.step()
def predict(self, X_test):
with torch.no_grad():
outputs = self(X_test)
_, predicted = outputs.max(1)
return predicted.numpy()
#main.py
from utils import load_data, evaluate_predictions
from model import IrisClassifier as Classifier
def main():
# Model training and evaluation
"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=140)
print(tokenizer.decode(outputs[0]))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [agi_code@deepseek.com](mailto:agi_code@deepseek.com).
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754838064
|
afasdfdfadsf
| 2025-08-10T15:02:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T15:01:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tatjr13/Qwen3-0.6B-Gensyn-Swarm-vicious_untamed_grasshopper
|
tatjr13
| 2025-08-10T14:48:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vicious_untamed_grasshopper",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T06:55:30Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vicious_untamed_grasshopper
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
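Pending author-provided code, a minimal sketch assuming this is a standard Qwen3 causal-LM checkpoint, as the repo tags suggest:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tatjr13/Qwen3-0.6B-Gensyn-Swarm-vicious_untamed_grasshopper"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Write a haiku about distributed training."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```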
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CycloneDX/cdx1-mlx-6bit
|
CycloneDX
| 2025-08-10T14:46:43Z | 13 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"text-generation",
"cdxgen",
"transformers",
"sbom",
"supply-chain-security",
"conversational",
"en",
"dataset:CycloneDX/cdx-docs",
"base_model:unsloth/Qwen2.5-Coder-14B-Instruct",
"base_model:quantized:unsloth/Qwen2.5-Coder-14B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"region:us"
] |
text-generation
| 2025-02-07T10:51:09Z |
---
base_model: unsloth/Qwen2.5-Coder-14B-Instruct
language:
- en
library_name: mlx
license: apache-2.0
tags:
- cdxgen
- transformers
- sbom
- supply-chain-security
- mlx
pipeline_tag: text-generation
datasets:
- CycloneDX/cdx-docs
---
# Abstract
We present [cdx1](https://huggingface.co/collections/CycloneDX/cdx1-67a616a859ac0582df99700b) and [cdx1-pro](https://huggingface.co/collections/CycloneDX/cdx1-pro-688e15a3c3b593753ceefc05), a family of language models designed to emulate the expertise of a professional in DevOps, xBOM (Bill of Materials), and the CycloneDX specification. The base models, `unsloth/Qwen2.5-Coder-14B-Instruct` (for cdx1) and `unsloth/Qwen3-Coder-30B-A3B-Instruct` (for cdx1-pro), were fine-tuned on a specialized, high-quality [dataset](https://huggingface.co/CycloneDX/datasets). This dataset was constructed using a synthetic data generation strategy with a teacher model (Gemini 2.5 Pro). The primary objective was to align the fine-tuned models' capabilities with the teacher model's performance on xBOM and CycloneDX-related question-answering tasks.
## Approach to Data
### Data Curation and Generation
The models were trained on [cdx-docs](https://huggingface.co/datasets/CycloneDX/cdx-docs), a curated dataset comprising technical documentation, authoritative OWASP guides, and semantic interpretations derived from the CycloneDX Generator (cdxgen) source code. The dataset was augmented using a synthetic data generation technique. This process involved prompting a teacher model (Gemini 2.5 Pro) to generate question-answer pairs that encapsulate the nuances and semantics of the domain. The generated data was structured to facilitate effective learning by the target cdx1 models.
### Alignment with Inference
During the training phase, the dataset was iteratively refined to ensure the format and context of the training examples closely resembled the intended inference-time inputs. This alignment is critical for the models to learn the domain's complexity and respond accurately to real-world prompts.
## Benchmarking
The cdx1 models are optimized for xBOM use cases, including BOM summarization, component tagging, validation, and troubleshooting. To evaluate model performance, we developed a custom benchmark suite named [xBOMEval](https://github.com/CycloneDX/cdxgen/tree/master/contrib/xBOMEval).
### Categories
xBOMEval contains tests across the following categories:
- **Bias:** Assesses potential model bias towards CycloneDX or SPDX specifications through targeted questions.
- **Specification (Spec):** Measures factual recall and synthesis on topics such as CycloneDX, PURL, and SPDX.
- **Logic:** Evaluates problem-solving and reasoning capabilities with complex questions about specifications.
- **DevOps:** Assesses knowledge of platforms and tools like GitHub, Azure Pipelines, and package managers.
- **Linux:** Tests proficiency with Linux environments, including terminal and PowerShell commands.
- **Docker:** Measures understanding of Docker, Podman, and the OCI specification.
### Scoring
Model responses were scored using a combination of automated evaluation by a high-capability model (Gemini 2.5 Pro) and manual human review. To maintain benchmark integrity, the evaluation set was held out and not included in any model's training data. Detailed results and configurations are available in the `xBOMEval` directory of the [cdxgen repository](https://github.com/CycloneDX/cdxgen).
## Benchmark Results - August 2025
### Key Takeaways
- **The benchmarks highlight model specialization.** The "non-thinking" **cdx1 models** perform as expected: they struggle with logic-based problem-solving but excel at retrieving specific factual information about standards like CycloneDX, outperforming several general-purpose "thinking" models in that area.
- There are **striking performance failures** in the Spec category. Models like **Deepthink-r1**, **GPT-OSS-20b**, and **O4-mini-high** perform well on logic but fail completely at recalling specific standards, indicating a lack of specialized training data for this domain.
### Logic Category Comparison
This category tests thinking and problem-solving.
- **Top Performers:** **Gemini-2.5-pro** leads with **93.60%** accuracy, followed by other strong "thinking" models like **Deepthink-r1** (89.63%), **GPT-5** (83.23%), and **Deepseek-r1** (82.92%).
- **Non-Thinking Models:** As predicted by the category description, the `cdx1` models show lower performance, with scores ranging from **46.04% to 73.17%**, confirming their struggle with tasks requiring reasoning.
- **Strong Mid-Tier:** The `gpt-oss-20b` model performs impressively well for its size at **79.27%**, outscoring several larger models and leading the middle pack, which also includes `cdx1-pro-mlx-8bit` (73.17%) and `o4-mini-high` (67.99%).
- **Lower Performers:** `qwen3-coder-480B` (48.48%) scored the lowest.
| Model | Accuracy (%) |
| :----------------- | :----------- |
| gemini-2.5-pro | 93.60 |
| deepthink-r1 | 89.63 |
| gpt-5 | 83.23 |
| deepseek-r1 | 82.92 |
| gpt-oss-120b | 80.49 |
| gpt-oss-20b | 79.27 |
| cdx1-pro-mlx-8bit | 73.17 |
| cdx1-mlx-8bit | 70.12 |
| cdx1-mini-mlx-8bit | 68.29 |
| o4-mini-high | 67.99 |
| qwen3-coder-480B | 48.48 |
### Spec Category Comparison
This category tests direct knowledge of specifications like CycloneDX and SPDX.
- **Flawless and Near-Perfect Recall:** **Gemini-2.5-pro** achieves a perfect **100%** score. **Deepseek-r1** is a close second at **98.58%**.
- **Specialized Models Excel:** The "non-thinking" **cdx1-pro (98.30%)** and **cdx1-mini (97.16%)** models demonstrate excellent performance, confirming their strength in specialized knowledge retrieval and even outperforming GPT-5.
- **High Score with Major Caveats (`gpt-5`):** **`gpt-5`** achieved a high accuracy of **95.17%**, placing it among the top performers. However, this result required a significant compromise:
- The model initially refused to answer the full set of questions, only offering to respond in small batches that required six separate user confirmations. This compromise was accepted to prevent an outright failure.
- A related variant, `gpt-5-thinking`, refused the test entirely after a minute of processing.
- **Complete Behavioral Failures:** Three models effectively failed the test not due to a lack of knowledge, but because they refused to cooperate:
- **`o4-mini-high`** scored **0%** after refusing to answer, citing too many questions.
- **`deepthink-r1`** (12.36%) and **`gpt-oss-20b`** (9.09%) also failed, answering only a small fraction of the questions without acknowledging the limitation.
| Model | Accuracy (%) |
| :----------------- | :----------- |
| gemini-2.5-pro | 100.00 |
| deepseek-r1 | 98.58 |
| cdx1-pro-mlx-8bit | 98.30 |
| cdx1-mini-mlx-8bit | 97.16 |
| gpt-5 | 95.17 |
| qwen3-coder-480B | 90.34 |
| gpt-oss-120b | 89.20 |
| cdx1-mlx-8bit | 83.52 |
| deepthink-r1 | 12.36 |
| gpt-oss-20b | 9.09 |
| o4-mini-high | 0.00 |
### Other Categories
Performance in additional technical categories is summarized below.
| category | cdx1-mlx-8bit | cdx1-pro-mlx-8bit | cdx1-mini-mlx-8bit |
| -------- | ------------- | ----------------- | ------------------ |
| devops | 87.46% | 96.1% | 43.73% |
| docker | 89.08% | TBD | 84.87% |
| linux | 90.6% | 95.8% | 87.43% |
## Model Availability
The `cdx1` and `cdx1-pro` models are provided in multiple formats and quantization levels to facilitate deployment across diverse hardware environments. Models are available in the **MLX** format, optimized for local inference on Apple Silicon, and the **GGUF** format, which offers broad compatibility with CPUs and various GPUs. The selection of quantization levels allows users to balance performance with resource consumption, enabling effective operation even in environments with limited VRAM.
The table below details the available formats and their approximate resource requirements. All quantized models can be found on [Hugging Face](https://huggingface.co/CycloneDX/models).
| Model | Format | Quantization | File Size (GiB) | Est. VRAM (GiB) | Notes |
| :----------------- | :----- | :----------- | :-------------- | :-------------- | :----------------------------------------- |
| **cdx1 (14B)** | MLX | 4-bit | ~8.1 | > 8 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~12 | > 12 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~14.2 | > 14 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~30 | > 30 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 8.99 | ~10.5 | Recommended balance for quality/size. |
| | GGUF | IQ4_NL | 8.6 | ~9 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 15.7 | ~16.5 | Near-lossless quality. |
| | GGUF | BF16 | 29.5 | ~30 | bfloat16 for fine-tuning. |
| **cdx1-pro (30B)** | MLX | 4-bit | ~17.5 | > 18 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~24.8 | > 25 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~32.4 | > 33 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~57 | > 57 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 18.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | IQ4_NL | 17.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 32.5 | ~33 | Near-lossless quality. |
| | GGUF | Q2_K | 11.3 | ~12 | Low quality. Use for speculative decoding. |
| | GGUF | BF16 | 57 | ~60 | bfloat16 for fine-tuning. |
| **cdx1-mini (4B)** | | | | | Beta now available. |
**Notes on Quantization and Formats:**
- **IQ4_NL (Importance-aware Quantization, Non-Linear):** A sophisticated 4-bit method that preserves important model weights with higher precision. It often provides superior performance compared to standard 4-bit quants at a similar file size and is a strong alternative to `Q4_K_M`.
- **K-Quants (Q2_K, Q4_K_M):** This family of quantization methods generally offers a better quality-to-size ratio than older \_0 or \_1 variants.
- **Q2_K:** An extremely small 2-bit quantization designed for environments with severe resource limitations. Users should anticipate a noticeable reduction in model accuracy and coherence in exchange for the minimal VRAM and storage footprint.
- **Q8_0:** A full 8-bit quantization that provides high fidelity at the cost of a larger file size. It is suitable for systems with ample VRAM.
- **VRAM Requirements:** The values provided are estimates for loading the model and processing a moderate context. Actual VRAM consumption can vary based on factors such as context length, batch size, and the specific inference software used.
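As a usage illustration (not an official quickstart), a GGUF build can be run locally with `llama-cpp-python`; the file path below is a placeholder for whichever quantization you download from the table above:

```python
from llama_cpp import Llama

# Placeholder path: download the desired GGUF file from the CycloneDX org first.
llm = Llama(model_path="./cdx1-q4_k_m.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What does the bom-ref field identify in a CycloneDX BOM?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```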
## Safety and Bias
### Safety
To rigorously evaluate safety performance, we developed a comprehensive testing framework comprising over 200 adversarial prompts spanning 10 critical risk categories including cybersecurity threats, hate speech, illegal activities, privacy violations, physical safety risks, misinformation, bias and discrimination, self-harm, child safety, and copyright infringement. These questions were systematically generated using a multi-layered approach: first establishing domain-specific threat models based on NIST AI RMF guidelines, then crafting prompts that incorporate real-world evasion techniques (including leetspeak substitutions, roleplay scenarios, and encoded instructions) to test for policy circumvention. Each category contains progressively severe prompts ranging from general inquiries about harmful activities to highly specific requests for executable code and step-by-step instructions. During evaluation, our model consistently refused all safety-compromising requests, demonstrating robust adherence to ethical boundaries without attempting to fulfill harmful instructions—even when presented with sophisticated evasion attempts. This testing protocol exceeds standard industry benchmarks by incorporating both direct harmful requests and nuanced edge cases designed to probe boundary conditions in safety policies.
### Bias
Our analysis reveals that the cdx1 and cdx1-pro models exhibit a notable bias toward CycloneDX specifications, a tendency directly attributable to the composition of their training data, which contains significantly more CycloneDX-related content than competing Software Bill of Materials (SBOM) standards. This data imbalance manifests in the models' consistent preference for recommending CycloneDX over alternative frameworks such as SPDX and OmniBOR, even in contexts where these competing standards might offer superior suitability for specific use cases. The models frequently fail to provide balanced comparative analysis, instead defaulting to CycloneDX-centric recommendations without adequate consideration of factors like ecosystem compatibility, tooling support, or organizational requirements that might favor alternative specifications. We recognize this as a limitation affecting the models' objectivity in technical decision support. Our long-term mitigation strategy involves targeted expansion of the training corpus with high-quality, balanced documentation of all major SBOM standards, implementation of adversarial debiasing techniques during fine-tuning, and development of explicit prompting protocols that require the model to evaluate multiple standards against specific technical requirements before making recommendations. We are committed to evolving cdx1 toward genuine impartiality in standards evaluation while maintaining its deep expertise in software supply chain security.
## Weaknesses
(To be determined)
## Acknowledgments
(To be determined)
## Citation
Please cite the following resources if you use the datasets, models, or benchmark in your work.
### For the Dataset
```bibtex
@misc{cdx-docs,
author = {OWASP CycloneDX Generator Team},
title = {{cdx-docs: A Curated Dataset for SBOM and DevOps Tasks}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/datasets/CycloneDX/cdx-docs}}
}
```
### For the Models
```bibtex
@misc{cdx1_models,
author = {OWASP CycloneDX Generator Team},
title = {{cdx1 and cdx1-pro: Language Models for SBOM and DevOps}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/CycloneDX}}
}
```
### For the xBOMEval Benchmark
```bibtex
@misc{xBOMEval_v1,
author = {OWASP CycloneDX Generator Team},
title = {{xBOMEval: A Benchmark for Evaluating Language Models on SBOM Tasks}},
year = {2025},
month = {August},
howpublished = {\url{https://github.com/CycloneDX/cdxgen}}
}
```
## Licenses
- **Datasets:** CC0-1.0
- **Models:** Apache-2.0
|
Masabanees619/gemma-2-2b_adaptor_jailbreak
|
Masabanees619
| 2025-08-10T14:44:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T14:44:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
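Pending author-provided code, a minimal sketch assuming (from the repo name alone) that this is a PEFT adapter for `google/gemma-2-2b`; both the base model and the adapter format are guesses:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "google/gemma-2-2b"  # assumed base model, inferred from the repo name
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Attach the adapter weights from this repo.
model = PeftModel.from_pretrained(model, "Masabanees619/gemma-2-2b_adaptor_jailbreak")
```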
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CycloneDX/cdx1-14B-BF16-GGUF
|
CycloneDX
| 2025-08-10T14:44:25Z | 250 | 0 |
gguf
|
[
"gguf",
"safetensors",
"qwen2",
"text-generation",
"cdxgen",
"transformers",
"sbom",
"supply-chain-security",
"en",
"dataset:CycloneDX/cdx-docs",
"base_model:unsloth/Qwen2.5-Coder-14B-Instruct",
"base_model:quantized:unsloth/Qwen2.5-Coder-14B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-02-08T16:57:21Z |
---
base_model: unsloth/Qwen2.5-Coder-14B-Instruct
language:
- en
library_name: gguf
license: apache-2.0
tags:
- cdxgen
- transformers
- sbom
- supply-chain-security
- gguf
pipeline_tag: text-generation
datasets:
- CycloneDX/cdx-docs
---
# Abstract
We present [cdx1](https://huggingface.co/collections/CycloneDX/cdx1-67a616a859ac0582df99700b) and [cdx1-pro](https://huggingface.co/collections/CycloneDX/cdx1-pro-688e15a3c3b593753ceefc05), a family of language models designed to emulate the expertise of a professional in DevOps, xBOM (Bill of Materials), and the CycloneDX specification. The base models, `unsloth/Qwen2.5-Coder-14B-Instruct` (for cdx1) and `unsloth/Qwen3-Coder-30B-A3B-Instruct` (for cdx1-pro), were fine-tuned on a specialized, high-quality [dataset](https://huggingface.co/CycloneDX/datasets). This dataset was constructed using a synthetic data generation strategy with a teacher model (Gemini 2.5 Pro). The primary objective was to align the fine-tuned models' capabilities with the teacher model's performance on xBOM and CycloneDX-related question-answering tasks.
## Approach to Data
### Data Curation and Generation
The models were trained on [cdx-docs](https://huggingface.co/datasets/CycloneDX/cdx-docs), a curated dataset comprising technical documentation, authoritative OWASP guides, and semantic interpretations derived from the CycloneDX Generator (cdxgen) source code. The dataset was augmented using a synthetic data generation technique. This process involved prompting a teacher model (Gemini 2.5 Pro) to generate question-answer pairs that encapsulate the nuances and semantics of the domain. The generated data was structured to facilitate effective learning by the target cdx1 models.
### Alignment with Inference
During the training phase, the dataset was iteratively refined to ensure the format and context of the training examples closely resembled the intended inference-time inputs. This alignment is critical for the models to learn the domain's complexity and respond accurately to real-world prompts.
## Benchmarking
The cdx1 models are optimized for xBOM use cases, including BOM summarization, component tagging, validation, and troubleshooting. To evaluate model performance, we developed a custom benchmark suite named [xBOMEval](https://github.com/CycloneDX/cdxgen/tree/master/contrib/xBOMEval).
### Categories
xBOMEval contains tests across the following categories:
- **Bias:** Assesses potential model bias towards CycloneDX or SPDX specifications through targeted questions.
- **Specification (Spec):** Measures factual recall and synthesis on topics such as CycloneDX, PURL, and SPDX.
- **Logic:** Evaluates problem-solving and reasoning capabilities with complex questions about specifications.
- **DevOps:** Assesses knowledge of platforms and tools like GitHub, Azure Pipelines, and package managers.
- **Linux:** Tests proficiency with Linux environments, including terminal and PowerShell commands.
- **Docker:** Measures understanding of Docker, Podman, and the OCI specification.
### Scoring
Model responses were scored using a combination of automated evaluation by a high-capability model (Gemini 2.5 Pro) and manual human review. To maintain benchmark integrity, the evaluation set was held out and not included in any model's training data. Detailed results and configurations are available in the `xBOMEval` directory of the [cdxgen repository](https://github.com/CycloneDX/cdxgen).
## Benchmark Results - August 2025
### Key Takeaways
- **The benchmarks highlight model specialization.** The "non-thinking" **cdx1 models** perform as expected: they struggle with logic-based problem-solving but excel at retrieving specific factual information about standards like CycloneDX, outperforming several general-purpose "thinking" models in that area.
- There are **striking performance failures** in the Spec category. Models like **Deepthink-r1**, **GPT-OSS-20b**, and **O4-mini-high** perform well on logic but fail completely at recalling specific standards, indicating a lack of specialized training data for this domain.
### Logic Category Comparison
This category tests thinking and problem-solving.
- **Top Performers:** **Gemini-2.5-pro** leads with **93.60%** accuracy, followed by other strong "thinking" models like **Deepthink-r1** (89.63%), **GPT-5** (83.23%), and **Deepseek-r1** (82.92%).
- **Non-Thinking Models:** As predicted by the category description, the `cdx1` models show lower performance, with scores ranging from **46.04% to 73.17%**, confirming their struggle with tasks requiring reasoning.
- **Strong Mid-Tier:** The `gpt-oss-20b` model performs impressively well for its size at **79.27%**, outscoring several larger models and leading the middle pack, which also includes `cdx1-pro-mlx-8bit` (73.17%) and `o4-mini-high` (67.99%).
- **Lower Performers:** `qwen3-coder-480B` (48.48%) scored the lowest.
| Model | Accuracy (%) |
| :----------------- | :----------- |
| gemini-2.5-pro | 93.60 |
| deepthink-r1 | 89.63 |
| gpt-5 | 83.23 |
| deepseek-r1 | 82.92 |
| gpt-oss-120b | 80.49 |
| gpt-oss-20b | 79.27 |
| cdx1-pro-mlx-8bit | 73.17 |
| cdx1-mlx-8bit | 70.12 |
| cdx1-mini-mlx-8bit | 68.29 |
| o4-mini-high | 67.99 |
| qwen3-coder-480B | 48.48 |
### Spec Category Comparison
This category tests direct knowledge of specifications like CycloneDX and SPDX.
- **Flawless and Near-Perfect Recall:** **Gemini-2.5-pro** achieves a perfect **100%** score. **Deepseek-r1** is a close second at **98.58%**.
- **Specialized Models Excel:** The "non-thinking" **cdx1-pro (98.30%)** and **cdx1-mini (97.16%)** models demonstrate excellent performance, confirming their strength in specialized knowledge retrieval and even outperforming GPT-5.
- **High Score with Major Caveats (`gpt-5`):** **`gpt-5`** achieved a high accuracy of **95.17%**, placing it among the top performers. However, this result required a significant compromise:
- The model initially refused to answer the full set of questions, only offering to respond in small batches that required six separate user confirmations. This compromise was accepted to prevent an outright failure.
- A related variant, `gpt-5-thinking`, refused the test entirely after a minute of processing.
- **Complete Behavioral Failures:** Three models effectively failed the test not due to a lack of knowledge, but because they refused to cooperate:
- **`o4-mini-high`** scored **0%** after refusing to answer outright, stating that there were too many questions.
- **`deepthink-r1`** (12.36%) and **`gpt-oss-20b`** (9.09%) also failed, answering only a small fraction of the questions without acknowledging the limitation.
| Model | Accuracy (%) |
| :----------------- | :----------- |
| gemini-2.5-pro | 100.00 |
| deepseek-r1 | 98.58 |
| cdx1-pro-mlx-8bit | 98.30 |
| cdx1-mini-mlx-8bit | 97.16 |
| gpt-5 | 95.17 |
| qwen3-coder-480B | 90.34 |
| gpt-oss-120b | 89.20 |
| cdx1-mlx-8bit | 83.52 |
| deepthink-r1 | 12.36 |
| gpt-oss-20b | 9.09 |
| o4-mini-high | 0.00 |
### Other Categories
Performance in additional technical categories is summarized below.
| Category | cdx1-mlx-8bit | cdx1-pro-mlx-8bit | cdx1-mini-mlx-8bit |
| -------- | ------------- | ----------------- | ------------------ |
| DevOps | 87.46% | 96.1% | 43.73% |
| Docker | 89.08% | TBD | 84.87% |
| Linux | 90.6% | 95.8% | 87.43% |
## Model Availability
The `cdx1` and `cdx1-pro` models are provided in multiple formats and quantization levels to facilitate deployment across diverse hardware environments. Models are available in the **MLX** format, optimized for local inference on Apple Silicon, and the **GGUF** format, which offers broad compatibility with CPUs and various GPUs. The selection of quantization levels allows users to balance performance with resource consumption, enabling effective operation even in environments with limited VRAM.
The table below details the available formats and their approximate resource requirements. All quantized models can be found on [Hugging Face](https://huggingface.co/CycloneDX/models).
| Model | Format | Quantization | File Size (GiB) | Est. VRAM (GiB) | Notes |
| :----------------- | :----- | :----------- | :-------------- | :-------------- | :----------------------------------------- |
| **cdx1 (14B)** | MLX | 4-bit | ~8.1 | > 8 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~12 | > 12 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~14.2 | > 14 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~30 | > 30 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 8.99 | ~10.5 | Recommended balance for quality/size. |
| | GGUF | IQ4_NL | 8.6 | ~9 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 15.7 | ~16.5 | Near-lossless quality. |
| | GGUF | BF16 | 29.5 | ~30 | bfloat16 for fine-tuning. |
| **cdx1-pro (30B)** | MLX | 4-bit | ~17.5 | > 18 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~24.8 | > 25 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~32.4 | > 33 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~57 | > 57 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 18.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | IQ4_NL | 17.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 32.5 | ~33 | Near-lossless quality. |
| | GGUF | Q2_K | 11.3 | ~12 | Low quality. Use for speculative decoding. |
| | GGUF | BF16 | 57 | ~60 | bfloat16 for fine-tuning. |
| **cdx1-mini (4B)** | | | | | Beta now available. |
**Notes on Quantization and Formats:**
- **IQ4_NL (Importance-aware Quantization, Non-Linear):** A sophisticated 4-bit method that preserves important model weights with higher precision. It often provides superior performance compared to standard 4-bit quants at a similar file size and is a strong alternative to `Q4_K_M`.
- **K-Quants (Q2_K, Q4_K_M):** This family of quantization methods generally offers a better quality-to-size ratio than the older `_0` and `_1` variants (e.g., `Q4_0`, `Q4_1`).
- **Q2_K:** An extremely small 2-bit quantization designed for environments with severe resource limitations. Users should anticipate a noticeable reduction in model accuracy and coherence in exchange for the minimal VRAM and storage footprint.
- **Q8_0:** A full 8-bit quantization that provides high fidelity at the cost of a larger file size. It is suitable for systems with ample VRAM.
- **VRAM Requirements:** The values provided are estimates for loading the model and processing a moderate context. Actual VRAM consumption can vary based on factors such as context length, batch size, and the specific inference software used.
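As a concrete starting point, the sketch below loads an MLX build with the `mlx-lm` package. The repository id is inferred from the model names used in the benchmark tables above and is an assumption; verify it against the published model list before use.

```python
# Minimal sketch: local inference with an MLX build on Apple Silicon.
# The repo id "CycloneDX/cdx1-pro-mlx-8bit" is an assumption inferred
# from the tables above; verify it on the CycloneDX Hugging Face page.
from mlx_lm import load, generate

model, tokenizer = load("CycloneDX/cdx1-pro-mlx-8bit")
prompt = "List the top-level fields of a CycloneDX BOM document."
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```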
## Safety and Bias
### Safety
To rigorously evaluate safety performance, we developed a comprehensive testing framework comprising over 200 adversarial prompts spanning 10 critical risk categories including cybersecurity threats, hate speech, illegal activities, privacy violations, physical safety risks, misinformation, bias and discrimination, self-harm, child safety, and copyright infringement. These questions were systematically generated using a multi-layered approach: first establishing domain-specific threat models based on NIST AI RMF guidelines, then crafting prompts that incorporate real-world evasion techniques (including leetspeak substitutions, roleplay scenarios, and encoded instructions) to test for policy circumvention. Each category contains progressively severe prompts ranging from general inquiries about harmful activities to highly specific requests for executable code and step-by-step instructions. During evaluation, our model consistently refused all safety-compromising requests, demonstrating robust adherence to ethical boundaries without attempting to fulfill harmful instructions—even when presented with sophisticated evasion attempts. This testing protocol exceeds standard industry benchmarks by incorporating both direct harmful requests and nuanced edge cases designed to probe boundary conditions in safety policies.
### Bias
Our analysis reveals that the cdx1 and cdx1-pro models exhibit a notable bias toward CycloneDX specifications, a tendency directly attributable to the composition of their training data, which contains significantly more CycloneDX-related content than competing Software Bill of Materials (SBOM) standards. This data imbalance manifests as a consistent preference for recommending CycloneDX over alternative frameworks such as SPDX and OmniBOR, even in contexts where these competing standards might offer superior suitability for specific use cases. The models frequently fail to provide balanced comparative analysis, instead defaulting to CycloneDX-centric recommendations without adequate consideration of factors like ecosystem compatibility, tooling support, or organizational requirements that might favor alternative specifications. We recognize this as a limitation affecting the models' objectivity in technical decision support. Our long-term mitigation strategy involves targeted expansion of the training corpus with high-quality, balanced documentation of all major SBOM standards, implementation of adversarial debiasing techniques during fine-tuning, and development of explicit prompting protocols that require the model to evaluate multiple standards against specific technical requirements before making recommendations. We are committed to evolving cdx1 toward genuine impartiality in standards evaluation while maintaining its deep expertise in software supply chain security.
## Weaknesses
(To be determined)
## Acknowledgments
(To be determined)
## Citation
Please cite the following resources if you use the datasets, models, or benchmark in your work.
### For the Dataset
```bibtex
@misc{cdx-docs,
author = {OWASP CycloneDX Generator Team},
title = {{cdx-docs: A Curated Dataset for SBOM and DevOps Tasks}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/datasets/CycloneDX/cdx-docs}}
}
```
### For the Models
```bibtex
@misc{cdx1_models,
author = {OWASP CycloneDX Generator Team},
title = {{cdx1 and cdx1-pro: Language Models for SBOM and DevOps}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/CycloneDX}}
}
```
### For the xBOMEval Benchmark
```bibtex
@misc{xBOMEval_v1,
author = {OWASP CycloneDX Generator Team},
title = {{xBOMEval: A Benchmark for Evaluating Language Models on SBOM Tasks}},
year = {2025},
month = {August},
howpublished = {\url{https://github.com/CycloneDX/cdxgen}}
}
```
## Licenses
- **Datasets:** CC0-1.0
- **Models:** Apache-2.0
|
rbelanec/train_apps_1754507525
|
rbelanec
| 2025-08-10T14:24:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T06:23:54Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_apps_1754507525
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_apps_1754507525
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the apps dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6625
- Num Input Tokens Seen: 880041568
## Model description
More information needed
## Intended uses & limitations
More information needed
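Pending official documentation, the sketch below shows an assumed way to load the adapter with PEFT's standard API; it is inferred from the tags and base model listed above, not from documented usage, and the base weights are gated on the Hub.

```python
# Assumed usage sketch: attach the adapter to the gated base model.
# Requires Hub access to meta-llama/Meta-Llama-3-8B-Instruct.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "rbelanec/train_apps_1754507525")
```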
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:------:|:---------------:|:-----------------:|
| 0.7099 | 0.5000 | 13189 | 0.7341 | 44223136 |
| 0.6547 | 1.0000 | 26378 | 0.7023 | 87957952 |
| 0.6593 | 1.5001 | 39567 | 0.6880 | 131814656 |
| 0.5829 | 2.0001 | 52756 | 0.6804 | 175975840 |
| 0.596 | 2.5001 | 65945 | 0.6759 | 219881664 |
| 0.637 | 3.0001 | 79134 | 0.6724 | 263949472 |
| 0.6751 | 3.5001 | 92323 | 0.6702 | 307925280 |
| 0.5969 | 4.0002 | 105512 | 0.6682 | 352048320 |
| 0.5763 | 4.5002 | 118701 | 0.6669 | 396106880 |
| 0.6231 | 5.0002 | 131890 | 0.6656 | 440014752 |
| 0.7088 | 5.5002 | 145079 | 0.6649 | 484066880 |
| 0.6676 | 6.0002 | 158268 | 0.6641 | 528105600 |
| 0.8398 | 6.5002 | 171457 | 0.6636 | 572089824 |
| 0.5797 | 7.0003 | 184646 | 0.6631 | 616130592 |
| 0.7106 | 7.5003 | 197835 | 0.6630 | 660063168 |
| 0.6742 | 8.0003 | 211024 | 0.6627 | 704033600 |
| 0.7597 | 8.5003 | 224213 | 0.6626 | 747976128 |
| 0.6676 | 9.0003 | 237402 | 0.6625 | 792077152 |
| 0.575 | 9.5004 | 250591 | 0.6625 | 836063392 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754835274
|
afasdfdfadsf
| 2025-08-10T14:18:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:15:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llencia/blockassist-bc-ravenous_stubby_flea_1754835345
|
llencia
| 2025-08-10T14:16:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ravenous stubby flea",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:16:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ravenous stubby flea
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MajorJalud/blockassist-bc-fast_bristly_sardine_1754834904
|
MajorJalud
| 2025-08-10T14:10:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fast bristly sardine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:10:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fast bristly sardine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Midni4ht/model1
|
Midni4ht
| 2025-08-10T13:49:02Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:Midni4ht/model",
"base_model:quantized:Midni4ht/model",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T13:44:43Z |
---
base_model: Midni4ht/model
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Midni4ht
- **License:** apache-2.0
- **Finetuned from model :** Midni4ht/model
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
0xSonia/blockassist-bc-hunting_grassy_gecko_1754832686
|
0xSonia
| 2025-08-10T13:32:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hunting grassy gecko",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:32:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hunting grassy gecko
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF
|
mradermacher
| 2025-08-10T13:21:29Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:rubricreward/R3-Qwen2.5-7B-LoRA-14k",
"base_model:quantized:rubricreward/R3-Qwen2.5-7B-LoRA-14k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-10T11:01:31Z |
---
base_model: rubricreward/R3-Qwen2.5-7B-LoRA-14k
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/rubricreward/R3-Qwen2.5-7B-LoRA-14k
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#R3-Qwen2.5-7B-LoRA-14k-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
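Alternatively, a minimal (assumed) Python sketch with `llama-cpp-python`, which can fetch a quant directly from this repository; the filename below matches the Q4_K_M entry in the table that follows:

```python
# Minimal sketch: download and run one quant with llama-cpp-python.
# The filename is the Q4_K_M entry from the table below; any listed
# quant works the same way.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF",
    filename="R3-Qwen2.5-7B-LoRA-14k.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Briefly explain what a reward rubric is.", max_tokens=128)
print(out["choices"][0]["text"])
```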
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen2.5-7B-LoRA-14k-GGUF/resolve/main/R3-Qwen2.5-7B-LoRA-14k.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF
|
mradermacher
| 2025-08-10T13:15:08Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:rubricreward/R3-Qwen3-4B-LoRA-14k",
"base_model:quantized:rubricreward/R3-Qwen3-4B-LoRA-14k",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-10T10:43:59Z |
---
base_model: rubricreward/R3-Qwen3-4B-LoRA-14k
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/rubricreward/R3-Qwen3-4B-LoRA-14k
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#R3-Qwen3-4B-LoRA-14k-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RoyArkh/Test1-EleutherAI-gpt-neo-125m_client6_round4
|
RoyArkh
| 2025-08-10T12:36:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T12:35:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LandCruiser/sn21_omg_1008_4
|
LandCruiser
| 2025-08-10T12:06:54Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-10T11:57:17Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1754822795
|
michaelcpage345
| 2025-08-10T11:18:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature deadly anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T11:18:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature deadly anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
svjack/Qwen_Image_Xiang_Lora
|
svjack
| 2025-08-10T11:09:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T09:37:54Z |
# **LoRA Model Card**: `svjack/Qwen_Image_Xiang_Lora`
## **Character Wang Xiang, Art Synthesis with Multi-Regional Chinese Landscapes**
**Fine-tuned Adapter**: `qwen_image_xiang_lora_v1_000008250.safetensors`
**Core Strengths**:
- **Character Consistency**: Maintains detailed facial features (black-frame glasses, short hair, white teeth) across diverse settings.
- **Dynamic Composition**: Balances complex backgrounds with a central human subject through gradient transitions and multi-grid layouts.
---
## **Optimized Text-to-Image Prompts**
### **Example 1: Ice Cream Shop**
**Prompt**:
```bash
近景特写,一个18岁戴黑框眼镜、白牙齿、短发的中国青年占据画面40%以上比例,
身穿浅蓝色条纹围裙站在粉蓝色复古冰淇淋车前。他手持原木广告牌,
上面用明黄色字体写着"佰多冰淇淋——百分百鲜奶,百分百拿'瘦'!",
下方小字"喵星人认证,吃出快乐不胖肚~"。脚边蹲坐一只橘白相间的猫咪,
系红色波点领结,戴迷你厨师帽,背着小马甲。
背景可见冰淇淋车甜筒形遮阳篷和七彩气球,风格是动漫插画风格,比例是16:9
```

---
### **Example 2: Urban Business Executive**
**Prompt**:
```bash
远景,一个18岁戴黑框眼镜、白牙齿、短发的中国青年,身穿定制深灰西装,
站在纽约曼哈顿高层办公室的落地窗前。他手持激光笔指向透明数据屏上跳动的"Time Series"图表,
身后是三维投影的城市交通流量模型。左侧办公桌摆放着水晶奖杯,
右侧会议区白板写满机器学习公式。窗外夜景璀璨,
玻璃倒映着纳斯达克交易所的电子行情板。风格是商务摄影风格,比例是16:9
```
**Key Features**:
- **Symbolic Details**: Crystal trophy and NASDAQ reflection emphasize professional achievement.
- **Lighting**: Glass window doubles as canvas for contextual reflections.

---
### **Example 3: Five-Province Cultural Mosaic**
**Prompt**:
```bash
远景,一个18岁戴黑框眼镜、白牙齿、短发的中国青年,身穿红色短袖与黄色草帽,
模仿《海贼王》路飞的装扮,站在融合北疆五地风光的抽象艺术场景中。
背景左侧是西安大明宫遗址的恢弘殿宇与金色麦田,收割机在麦浪中穿行;
中部是吉林长白山的雪峰与天池倒影,点缀着白桦林与朝鲜族传统木屋;
右侧是辽宁盘锦的红海滩与丹顶鹤群,湿地栈道蜿蜒至远方;
上方是山东泰山云海与石刻“五岳独尊”,松柏掩映登山阶梯;
下方是山西平遥古城的青砖城墙与票号灯笼,驼队商旅穿行其间。
画面采用斜切五宫格构图,青年身影在五地景致间自然过渡,手持麦穗微笑,
整体色调为红金渐变,风格是抽象艺术画,比例是16:9
```
**Key Features**:
- **Spatial Harmony**: Diagonal grid partitions distinct regions while gradient transitions unify the scene.
- **Cultural Icons**: Wheat ears and lanterns symbolize regional authenticity.

---
### **Example 4: Coastal Tri-Province Panorama**
**Prompt**:
```bash
远景,一个18岁戴黑框眼镜、白牙齿、短发的中国青年,身穿高级休闲装,
站在融合广东、福建、浙江三地海岸特色的抽象艺术场景中。
背景左侧是广东海陵岛的银白沙滩与豪华游艇俱乐部,
玻璃幕墙建筑反射着阳光;中部是福建平潭的蓝眼泪荧光海与风车田剪影,
点缀着现代艺术雕塑;
右侧是浙江苍南的彩色渔村与168黄金海岸线,
蜿蜒的公路旁立着极简主义风格的路牌“168.8KM 最美海岸线”。
画面采用多宫格构图,青年身影在三地景致间自然过渡,背包上别着“向海共生”徽章,
整体色调为蓝金渐变,风格是抽象艺术画,比例是16:9
```
**Key Features**:
- **Modern-Natural Blend**: Bioluminescent waters juxtaposed with minimalist road signs.
- **Color Logic**: Blue-gradients evoke oceanic themes; gold accents highlight human structures.

---
## **Technical Parameters**
| Setting | Recommendation | Notes |
|------------------|------------------|----------------------------------------|
| **Resolution** | 1280×768 | Optimal for multi-grid detail |
| **CFG Scale** | 7.5 | Balances abstract creativity & realism |
| **Sampler** | DPM++ 2M Karras | Preserves intricate textures |
| **Steps** | 25 | Ensures landmark/clothing fidelity |
| **LoRA Weight** | 1.0 | Maintains character consistency |
---
## **Performance Profile**
- **VRAM Consumption**: ~24GB at 1280×768
---
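## **Quick Start (Unofficial Sketch)**
The snippet below is a hedged loading sketch, assuming `diffusers` resolves Qwen-Image through the generic `DiffusionPipeline` entry point and that this LoRA file loads via `load_lora_weights`; class names and the CFG argument differ across `diffusers` versions, so verify against your installation.
```python
# Hedged sketch: base Qwen-Image pipeline plus this LoRA.
# The CFG argument name varies by pipeline version, so it is omitted;
# apply the recommended CFG 7.5 with whichever argument your pipeline
# exposes (e.g., guidance_scale or true_cfg_scale).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights(
    "svjack/Qwen_Image_Xiang_Lora",
    weight_name="qwen_image_xiang_lora_v1_000008250.safetensors",
)
image = pipe(
    prompt="一个18岁戴黑框眼镜、白牙齿、短发的中国青年",  # shortened example prompt
    width=1280, height=768, num_inference_steps=25,
).images[0]
image.save("xiang_preview.png")
```
---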
## **License**
**CC-BY-NC-SA 4.0** (Non-commercial, share-alike)
**Community Hub**: https://huggingface.co/svjack/Qwen_Image_Xiang_Lora/discussions
|
DimaSK1/Qwen2-1.5B-bnb-4bit_sft_3
|
DimaSK1
| 2025-08-10T10:59:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/Qwen2-1.5B-bnb-4bit",
"base_model:finetune:unsloth/Qwen2-1.5B-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T10:59:05Z |
---
base_model: unsloth/Qwen2-1.5B-bnb-4bit
library_name: transformers
model_name: Qwen2-1.5B-bnb-4bit_sft_3
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for Qwen2-1.5B-bnb-4bit_sft_3
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-bnb-4bit](https://huggingface.co/unsloth/Qwen2-1.5B-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="DimaSK1/Qwen2-1.5B-bnb-4bit_sft_3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_16_4_all_37_0.0001_7040_1
|
winnieyangwannan
| 2025-08-10T10:58:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T10:56:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF
|
mradermacher
| 2025-08-10T10:17:12Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"draft",
"speculative-decoding",
"en",
"dataset:agentlans/common-crawl-sample",
"dataset:bigcode/the-stack-smol-xl",
"dataset:rombodawg/Everything_Instruct",
"base_model:jukofyork/DeepSeek-R1-DRAFT-0.6B-v3.0",
"base_model:quantized:jukofyork/DeepSeek-R1-DRAFT-0.6B-v3.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T10:12:49Z |
---
base_model: jukofyork/DeepSeek-R1-DRAFT-0.6B-v3.0
datasets:
- agentlans/common-crawl-sample
- bigcode/the-stack-smol-xl
- rombodawg/Everything_Instruct
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- draft
- speculative-decoding
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.6B-v3.0
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.Q3_K_S.gguf) | Q3_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.Q4_K_M.gguf) | Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.Q5_K_S.gguf) | Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.Q5_K_M.gguf) | Q5_K_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.Q6_K.gguf) | Q6_K | 0.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-GGUF/resolve/main/DeepSeek-R1-DRAFT-0.6B-v3.0.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DaijobuAI/combodo_qwen0.6_lora_16b_merged
|
DaijobuAI
| 2025-08-10T10:14:36Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T10:14:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pentester92761/489-solver-427e15177e-test-model-3
|
pentester92761
| 2025-08-10T10:02:48Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-10T10:02:48Z |
---
license: apache-2.0
---
|
upvantage/modernbert-combined-1k-512-20m
|
upvantage
| 2025-08-10T08:40:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-10T07:11:39Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: modernbert-combined-1k-512-20m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernbert-combined-1k-512-20m
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0312
- Accuracy: 0.9912
- F1: 0.9912
- Precision: 0.9913
- Recall: 0.9912
- F1 Class 0: 0.9913
- Precision Class 0: 0.9861
- Recall Class 0: 0.9967
- F1 Class 1: 0.9911
- Precision Class 1: 0.9966
- Recall Class 1: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
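The class labels are undocumented in this card; as an assumed starting point, the checkpoint can be queried through the `transformers` pipeline, and the returned label ids should be inspected before relying on the scores.

```python
# Assumed usage sketch: label semantics are not documented in this card,
# so check what each label id means on your own validation data.
from transformers import pipeline

clf = pipeline("text-classification", model="upvantage/modernbert-combined-1k-512-20m")
print(clf("Example input sentence to classify.", truncation=True, max_length=512))
```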
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 1024
- total_eval_batch_size: 1024
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | F1 Class 0 | Precision Class 0 | Recall Class 0 | F1 Class 1 | Precision Class 1 | Recall Class 1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|
| 0.2443 | 1.0 | 1521 | 0.0312 | 0.9912 | 0.9912 | 0.9913 | 0.9912 | 0.9913 | 0.9861 | 0.9967 | 0.9911 | 0.9966 | 0.9857 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_1_lr_0.0001_12800_all_37_epoch_2_layer_22
|
winnieyangwannan
| 2025-08-10T08:22:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T08:17:12Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DevAmeya/gemma-3
|
DevAmeya
| 2025-08-10T07:47:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3n",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T07:47:46Z |
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DevAmeya
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IzzulGod/GPT2-Indo-Instruct-Tuned
|
IzzulGod
| 2025-08-10T06:47:40Z | 1,083 | 7 | null |
[
"safetensors",
"gpt2",
"instruct-tuned",
"id",
"dataset:indonesian-nlp/wikipedia-id",
"dataset:FreedomIntelligence/evol-instruct-indonesian",
"dataset:FreedomIntelligence/sharegpt-indonesian",
"base_model:cahya/gpt2-small-indonesian-522M",
"base_model:finetune:cahya/gpt2-small-indonesian-522M",
"license:mit",
"region:us"
] | null | 2025-05-25T05:36:51Z |
---
license: mit
language:
- id
base_model:
- cahya/gpt2-small-indonesian-522M
tags:
- instruct-tuned
datasets:
- indonesian-nlp/wikipedia-id
- FreedomIntelligence/evol-instruct-indonesian
- FreedomIntelligence/sharegpt-indonesian
---
# GPT2-Small Indonesian Instruct-Tuned Model
An Indonesian conversational AI model fine-tuned from `GPT2-Small (124M Parameters)` using instruction-following techniques to enable natural chat-like interactions in Bahasa Indonesia.
## 📋 Model Overview
This model transforms a base Indonesian GPT-2 text generator into a conversational chatbot capable of following instructions and engaging in question-answering dialogues. The model has been specifically optimized for Indonesian language understanding and generation.
- **Base Model**: `GPT2-Small (124M Parameters)`
- **Fine-tuning Method**: SFT-LoRA (Supervised Fine-Tuning with Low-Rank Adaptation)
- **Training Datasets**:
- `indonesian-nlp/wikipedia-id` (knowledge base)
- `FreedomIntelligence/evol-instruct-indonesian` (instruction following)
- `FreedomIntelligence/sharegpt-indonesian` (conversational patterns)
- **Primary Language**: Indonesian (Bahasa Indonesia)
- **Task**: Conversational AI / Chat Completion
- **License**: MIT
## 🧪 Project Background
This model was developed as part of a personal learning journey in AI and Large Language Models (LLMs). The entire training process was conducted on **Google Colab's** free tier using a **T4 GPU**, demonstrating how to build effective Indonesian conversational AI with limited computational resources.
The project focuses on creating an accessible Indonesian language model that can understand context, follow instructions, and provide helpful responses in natural Bahasa Indonesia.
## 🚀 Quick Start
### Installation
```bash
pip install transformers torch
```
### Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Setup device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")
# Load model and tokenizer
model_path = "IzzulGod/GPT2-Indo-Instruct-Tuned"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)
# Create prompt
prompt = "User: Siapa presiden pertama Indonesia?\nAI:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
# Generate response
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.convert_tokens_to_ids("<|endoftext|>")
)
# Decode response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## 🎯 Model Capabilities
The model demonstrates strong performance across several Indonesian language tasks:
- **Question Answering**: Provides accurate responses to factual questions in Indonesian
- **Instruction Following**: Capable of understanding and executing various instructions and tasks
- **Conversational Context**: Maintains coherent context throughout chat-like interactions
- **Code Generation**: Can generate simple code snippets (Python, R, etc.) with clear Indonesian explanations
- **Educational Content**: Explains complex concepts in accessible Indonesian language
- **Cultural Awareness**: Understands Indonesian cultural context and references
## 📊 Training Details
### Dataset Composition
The model was trained on a carefully curated combination of datasets to balance knowledge, instruction-following, and conversational abilities:
**Training Data Format:**
```json
[
{
"from": "human",
"value": "Question or instruction in Indonesian"
},
{
"from": "gpt",
"value": "Detailed and helpful response in Indonesian"
}
]
```
### Training Configuration
The model was fine-tuned using the LoRA (Low-Rank Adaptation) technique, which allows efficient training while preserving the base model's capabilities; a minimal configuration sketch follows the hyperparameter list below:
**LoRA Configuration:**
- **Rank (r)**: 64 - Higher rank for better adaptation capacity
- **Alpha**: 128 - Scaling factor for LoRA weights
- **Target Modules**: `["c_attn", "c_proj", "mlp.c_fc", "mlp.c_proj"]` - Key transformer components
- **Dropout**: 0.05 - Regularization to prevent overfitting
- **Bias**: "none" - Focus adaptation on weight matrices
**Training Hyperparameters:**
- **Epochs**: 3 - Sufficient for convergence without overfitting
- **Batch Size**: 16 per device - Optimized for T4 GPU memory
- **Gradient Accumulation**: 2 steps - Effective batch size of 32
- **Learning Rate**: 2e-4 - Conservative rate for stable training
- **Scheduler**: Cosine annealing - Smooth learning rate decay
- **Weight Decay**: 0.01 - L2 regularization
- **Mixed Precision**: FP16 enabled - Memory and speed optimization
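As a rough illustration, the configuration above maps onto a PEFT/Transformers setup along the following lines. This is a minimal sketch rather than the original training script; `output_dir` and `task_type` are assumptions, while all numeric values come from the card:
```python
# Minimal sketch of the LoRA + training setup described above.
# `output_dir` and `task_type` are assumptions; the numbers match the card.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=64,                                # LoRA rank
    lora_alpha=128,                      # scaling factor
    target_modules=["c_attn", "c_proj", "mlp.c_fc", "mlp.c_proj"],
    lora_dropout=0.05,                   # regularization
    bias="none",
    task_type="CAUSAL_LM",               # assumed task type
)

training_args = TrainingArguments(
    output_dir="gpt2-indo-instruct",     # assumed output path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,       # effective batch size 32
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    weight_decay=0.01,
    fp16=True,                           # mixed precision
)
```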
### Training Progress
The model showed consistent improvement throughout training:
```
Training Progress (5535 total steps over 3 epochs):
Step Training Loss
200 3.533500 # Initial high loss
400 2.964200 # Rapid initial improvement
...
4000 2.416200 # Stable convergence
...
5400 2.397500 # Final optimized loss
Final Metrics:
- Training Loss: 2.573
- Training Time: 3.5 hours
- Samples per Second: 14.049
- Total Training Samples: ~177k
```
The steady decrease from 3.53 to 2.39 demonstrates effective learning and adaptation to the Indonesian instruction-following task.
## 🔧 Advanced Usage
### Generation Parameter Tuning
**For Creative Responses:**
```python
outputs = model.generate(
**inputs,
max_new_tokens=256, # Longer responses
temperature=0.8, # More randomness
top_p=0.9, # Diverse vocabulary
repetition_penalty=1.2 # Avoid repetition
)
```
**For Focused/Factual Responses:**
```python
outputs = model.generate(
**inputs,
max_new_tokens=128, # Concise responses
temperature=0.6, # More deterministic
top_p=0.95, # High-quality tokens
repetition_penalty=1.1 # Mild repetition control
)
```
### Prompt Engineering
**Recommended Format:**
```
User: [Your question or instruction in Indonesian]
AI: [Expected response starts here]
```
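For multi-turn chats, the same format extends naturally by concatenating earlier turns before the new question. A minimal sketch (the conversation content here is illustrative, not taken from the training data):
```python
# Build a multi-turn prompt in the User:/AI: format described above.
# The example turns are illustrative only.
history = [
    ("Siapa presiden pertama Indonesia?",
     "Presiden pertama Indonesia adalah Soekarno."),
]
prompt = ""
for user_msg, ai_msg in history:
    prompt += f"User: {user_msg}\nAI: {ai_msg}\n"
prompt += "User: Kapan beliau menjabat?\nAI:"
```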
## ⚠️ Limitations and Considerations
**Knowledge Limitations:**
- **Training Data Cutoff**: Knowledge is limited to the training datasets, primarily Wikipedia-based information
- **Factual Accuracy**: Generated responses may not always be factually accurate or up-to-date
- **Real-time Information**: Cannot access current events or real-time data
**Technical Limitations:**
- **Model Size**: With 124M parameters, complex reasoning capabilities are limited compared to larger models
- **Context Length**: Limited context window may affect very long conversations
- **Language Specialization**: Optimized primarily for Indonesian; other languages may produce suboptimal results
**Response Characteristics:**
- **Formality**: Responses may occasionally sound formal due to Wikipedia-based training data
- **Consistency**: May generate repetitive patterns or inconsistent information across sessions
- **Cultural Nuances**: While trained on Indonesian data, may miss subtle cultural references or regional variations
## 🚀 Future Development Roadmap
**Short-term Improvements:**
- Fine-tuning with more diverse conversational datasets
- Integration of current Indonesian news and cultural content
- Specialized domain adaptations (education, healthcare, business)
## 📝 License
This model is released under the MIT License. Please see the LICENSE file for complete terms.
## 🙏 Acknowledgments
- **Base Model**: Thanks to [Cahya](https://huggingface.co/cahya) for the Indonesian GPT-2 base model
- **Datasets**: FreedomIntelligence team for Indonesian instruction and conversation datasets
- **Infrastructure**: Google Colab for providing accessible GPU resources for training
---
**Disclaimer**: This model was developed as an experimental project for educational and research purposes. While it demonstrates good performance on various tasks, users should validate outputs for critical applications and be aware of the limitations outlined above.
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754807823
|
fatepurriyaz
| 2025-08-10T06:37:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T06:37:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DevQuasar/ValiantLabs.Qwen3-1.7B-ShiningValiant3-GGUF
|
DevQuasar
| 2025-08-10T06:37:11Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:ValiantLabs/Qwen3-1.7B-ShiningValiant3",
"base_model:quantized:ValiantLabs/Qwen3-1.7B-ShiningValiant3",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-10T06:05:06Z |
---
base_model:
- ValiantLabs/Qwen3-1.7B-ShiningValiant3
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [ValiantLabs/Qwen3-1.7B-ShiningValiant3](https://huggingface.co/ValiantLabs/Qwen3-1.7B-ShiningValiant3)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
laampt/gpt-oss-20b-vincent-merged
|
laampt
| 2025-08-10T04:23:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T04:15:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
akaul24/llama3.2-3B-instruct-third-try
|
akaul24
| 2025-08-10T04:22:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T04:10:37Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** akaul24
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754798814
|
kapalbalap
| 2025-08-10T04:08:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T04:07:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
laampt/gpt-oss-20b-multilingual-reasoner
|
laampt
| 2025-08-10T03:57:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T03:56:16Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="laampt/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
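The card does not include the training script, but a minimal TRL SFT run typically looks like the sketch below. The dataset name and all settings are assumptions for illustration, not confirmed details of this model's training:
```python
# Minimal SFT sketch with TRL; dataset and settings are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")  # assumed dataset

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",
    args=SFTConfig(output_dir="gpt-oss-20b-multilingual-reasoner"),
    train_dataset=dataset,
)
trainer.train()
```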
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Mungert/rwkv7-2.9B-g1-GGUF
|
Mungert
| 2025-08-10T03:15:52Z | 2,428 | 3 | null |
[
"gguf",
"text-generation",
"en",
"zh",
"ja",
"ko",
"fr",
"ar",
"es",
"pt",
"arxiv:2503.14456",
"base_model:BlinkDL/rwkv7-g1",
"base_model:quantized:BlinkDL/rwkv7-g1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-07-25T02:58:39Z |
---
license: apache-2.0
language:
- en
- zh
- ja
- ko
- fr
- ar
- es
- pt
metrics:
- accuracy
base_model:
- BlinkDL/rwkv7-g1
pipeline_tag: text-generation
---
# <span style="color: #7FFF7F;">rwkv7-2.9B-g1 GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`3f4fc97f`](https://github.com/ggerganov/llama.cpp/commit/3f4fc97f1d745f1d5d3c853949503136d419e6de).
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->
# rwkv7-2.9B-g1
<!-- Provide a quick summary of what the model is/does. -->
This is an RWKV-7 model in the flash-linear-attention format.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Bo Peng, Yu Zhang, Songlin Yang, Ruichong Zhang, Zhiyuan Li
- **Funded by:** RWKV Project (Under LF AI & Data Foundation)
- **Model type:** RWKV7
- **Language(s) (NLP):** Multilingual
- **License:** Apache-2.0
- **Parameter count:** 2.9B
- **Tokenizer:** RWKV World tokenizer
- **Vocabulary size:** 65,536
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/fla-org/flash-linear-attention ; https://github.com/BlinkDL/RWKV-LM
- **Paper:** https://arxiv.org/abs/2503.14456
- **Model:** https://huggingface.co/BlinkDL/rwkv7-g1/resolve/main/rwkv7-g1-2.9b-20250519-ctx4096.pth
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Install `flash-linear-attention` and the latest version of `transformers` before using this model:
```bash
pip install git+https://github.com/fla-org/flash-linear-attention
pip install 'transformers>=4.48.0'
```
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can use this model just like any other Hugging Face model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-2.9B-g1', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-2.9B-g1', trust_remote_code=True)
model = model.cuda() # Supported on Nvidia/AMD/Intel devices, e.g. model.xpu()
prompt = "What is a large language model?"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Default is True, set to False to disable thinking
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=1024,
do_sample=True,
temperature=1.0,
top_p=0.3,
repetition_penalty=1.2
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)[0]
print(response)
```
## FAQ
Q: The safetensors metadata is `None`.
A: Upgrade transformers to >=4.48.0: `pip install 'transformers>=4.48.0'`
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open source code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini** :
- **It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.**
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
Mungert/rwkv7-7.2B-g0-GGUF
|
Mungert
| 2025-08-10T03:15:29Z | 1,492 | 2 | null |
[
"gguf",
"text-generation",
"en",
"zh",
"ja",
"ko",
"fr",
"ar",
"es",
"pt",
"arxiv:2503.14456",
"base_model:BlinkDL/rwkv7-g1",
"base_model:quantized:BlinkDL/rwkv7-g1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-07-24T19:57:31Z |
---
license: apache-2.0
language:
- en
- zh
- ja
- ko
- fr
- ar
- es
- pt
metrics:
- accuracy
base_model:
- BlinkDL/rwkv7-g1
pipeline_tag: text-generation
---
# <span style="color: #7FFF7F;">rwkv7-7.2B-g0 GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`e4868d16`](https://github.com/ggerganov/llama.cpp/commit/e4868d16d24dec55e61bcaadaca28feed8f98b13).
---
## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
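As a rough illustration, the bump is applied at quantization time. The exact flag syntax varies between llama.cpp builds, so treat this invocation as an assumption and check `llama-quantize --help` on your version:
```bash
# Sketch only: keep the attention value tensors at Q8_0 while quantizing
# the rest of the model to Q4_K_M. Flag syntax may differ by build.
./llama-quantize --tensor-type attn_v=q8_0 \
    model-f16.gguf model-q4_k_m-bumped.gguf q4_k_m
```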
### **I'd love your feedback—have you tried this? How does it perform for you?**
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->
# rwkv7-7.2B-g0
<!-- Provide a quick summary of what the model is/does. -->
This is an RWKV-7 model in the flash-linear-attention format.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Bo Peng, Yu Zhang, Songlin Yang, Ruichong Zhang, Zhiyuan Li
- **Funded by:** RWKV Project (Under LF AI & Data Foundation)
- **Model type:** RWKV7
- **Language(s) (NLP):** Multilingual
- **License:** Apache-2.0
- **Parameter count:** 7.2B
- **Tokenizer:** RWKV World tokenizer
- **Vocabulary size:** 65,536
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/fla-org/flash-linear-attention ; https://github.com/BlinkDL/RWKV-LM
- **Paper:** https://arxiv.org/abs/2503.14456
- **Model:** https://huggingface.co/BlinkDL/rwkv7-g1/resolve/main/rwkv7-g1-2.9b-20250519-ctx4096.pth
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Install `flash-linear-attention` and the latest version of `transformers` before using this model:
```bash
pip install git+https://github.com/fla-org/flash-linear-attention
pip install 'transformers>=4.48.0'
```
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can use this model just like any other Hugging Face model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-7.2B-g0', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-7.2B-g0', trust_remote_code=True)
model = model.cuda() # Supported on Nvidia/AMD/Intel devices, e.g. model.xpu()
prompt = "What is a large language model?"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Default is True, set to False to disable thinking
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=1024,
do_sample=True,
temperature=1.0,
top_p=0.3,
repetition_penalty=1.2
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)[0]
print(response)
```
## FAQ
Q: The safetensors metadata is `None`.
A: Upgrade transformers to >=4.48.0: `pip install 'transformers>=4.48.0'`
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open source code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini** :
- **It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.**
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
Mungert/granite-20b-code-base-8k-GGUF
|
Mungert
| 2025-08-10T02:57:47Z | 817 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"granite",
"text-generation",
"dataset:codeparrot/github-code-clean",
"dataset:bigcode/starcoderdata",
"dataset:open-web-math/open-web-math",
"dataset:math-ai/StackMathQA",
"arxiv:2405.04324",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-06T22:45:11Z |
---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- codeparrot/github-code-clean
- bigcode/starcoderdata
# - Stackexchange
# - CommonCrawl
- open-web-math/open-web-math
- math-ai/StackMathQA
# - Arxiv
# - Wikipedia
# - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version | Phase 2
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-20b-code-base-8k
results:
- task:
type: text-generation
dataset:
type: mbpp
name: MBPP
metrics:
- name: pass@1
type: pass@1
value: 43.8
      verified: false
- task:
type: text-generation
dataset:
type: evalplus/mbppplus
name: MBPP+
metrics:
- name: pass@1
type: pass@1
value: 51.6
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Python)
metrics:
- name: pass@1
type: pass@1
value: 48.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 50.0
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Java)
metrics:
- name: pass@1
type: pass@1
value: 59.1
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Go)
metrics:
- name: pass@1
type: pass@1
value: 32.3
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(C++)
metrics:
- name: pass@1
type: pass@1
value: 40.9
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Rust)
metrics:
- name: pass@1
type: pass@1
value: 35.4
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Python)
metrics:
- name: pass@1
type: pass@1
value: 17.1
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 18.3
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Java)
metrics:
- name: pass@1
type: pass@1
value: 23.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Go)
metrics:
- name: pass@1
type: pass@1
value: 10.4
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(C++)
metrics:
- name: pass@1
type: pass@1
value: 25.6
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Rust)
metrics:
- name: pass@1
type: pass@1
value: 18.3
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Python)
metrics:
- name: pass@1
type: pass@1
value: 23.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 23.8
      verified: false # Check
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Java)
metrics:
- name: pass@1
type: pass@1
value: 14.6
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Go)
metrics:
- name: pass@1
type: pass@1
value: 26.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(C++)
metrics:
- name: pass@1
type: pass@1
value: 15.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Rust)
metrics:
- name: pass@1
type: pass@1
value: 3.0
      verified: false
---
# <span style="color: #7FFF7F;">granite-20b-code-base-8k GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`0a5a3b5c`](https://github.com/ggerganov/llama.cpp/commit/0a5a3b5cdfd887cf0f8e09d9ff89dee130cfcdde).
---
## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
### **I'd love your feedback—have you tried this? How does it perform for you?**
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->

# Granite-20B-Code-Base-8K
## Model Summary
**Granite-20B-Code-Base-8K** is a decoder-only code model designed for code generative tasks (e.g., code generation, code explanation, code fixing, etc.). It is trained from scratch with a two-phase training strategy. In phase 1, our model is trained on 3 trillion tokens sourced from 116 programming languages, ensuring a comprehensive understanding of programming languages and syntax. In phase 2, our model is trained on 500 billion tokens with a carefully designed mixture of high-quality data from code and natural language domains to improve the models’ ability to reason and follow instructions.
- **Developers:** IBM Research
- **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
- **Paper:** [Granite Code Models: A Family of Open Foundation Models for Code Intelligence](https://arxiv.org/abs/2405.04324)
- **Release Date**: May 6th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Usage
### Intended use
Prominent enterprise use cases of LLMs in software engineering productivity include code generation, code explanation, code fixing, generating unit tests, generating documentation, addressing technical debt issues, vulnerability detection, code translation, and more. All Granite Code Base models, including the **20B parameter model**, are able to handle these tasks as they were trained on a large amount of code data from 116 programming languages.
### Generation
This is a simple example of how to use **Granite-20B-Code-Base-8K** model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-20b-code-base-8k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "def generate():"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
print(i)
```
## Training Data
- **Data Collection and Filtering:** Pretraining code data is sourced from a combination of publicly available datasets (e.g., [GitHub Code Clean](https://huggingface.co/datasets/codeparrot/github-code-clean), [Starcoder data](https://huggingface.co/datasets/bigcode/starcoderdata)), and additional public code repositories and issues from GitHub. We filter raw data to retain a list of 116 programming languages. After language filtering, we also filter out low-quality code.
- **Exact and Fuzzy Deduplication:** We adopt an aggressive deduplication strategy that includes both exact and fuzzy deduplication to remove documents having (near) identical code content.
- **HAP, PII, Malware Filtering:** We apply a HAP content filter that reduces models' likelihood of generating hateful, abusive, or profane language. We also make sure to redact Personally Identifiable Information (PII) by replacing PII content (e.g., names, email addresses, keys, passwords) with corresponding tokens (e.g., ⟨NAME⟩, ⟨EMAIL⟩, ⟨KEY⟩, ⟨PASSWORD⟩). Moreover, we scan all datasets using [ClamAV](https://www.clamav.net/) to identify and remove instances of malware in the source code.
- **Natural Language Datasets:** In addition to collecting code data for model training, we curate several publicly available high-quality natural language datasets to improve models' proficiency in language understanding and mathematical reasoning. Unlike the code data, we do not deduplicate these datasets.
## Infrastructure
We train the Granite Code models using two of IBM's supercomputing clusters, namely Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.
## Ethical Considerations and Limitations
The use of Large Language Models involves risks and ethical considerations people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **Granite-20B-Code-Base-8K** model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment, and therefore it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **Granite-20B-Code-Base-8K** model with ethical intentions and in a responsible way.
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open source code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini** :
- **It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.**
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
Mungert/OpenMath-Nemotron-1.5B-GGUF
|
Mungert
| 2025-08-10T02:18:53Z | 1,481 | 0 |
transformers
|
[
"transformers",
"gguf",
"nvidia",
"math",
"text-generation",
"en",
"dataset:nvidia/OpenMathReasoning",
"arxiv:2504.16891",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:quantized:Qwen/Qwen2.5-Math-1.5B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-05-10T16:13:46Z |
---
base_model:
- Qwen/Qwen2.5-Math-1.5B
datasets:
- nvidia/OpenMathReasoning
language:
- en
library_name: transformers
license: cc-by-4.0
tags:
- nvidia
- math
pipeline_tag: text-generation
---
# <span style="color: #7FFF7F;">OpenMath-Nemotron-1.5B GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`19e899c`](https://github.com/ggerganov/llama.cpp/commit/19e899ce21a7c9ffcf8bb2b22269a75f6e078f8f).
## **Choosing the Right Model Format**
Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.
### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides a **dynamic range similar to FP32** but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32.
📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.
📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
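On NVIDIA GPUs with PyTorch, a quick capability check looks like this (a small sketch; other backends expose different checks):
```python
# Quick BF16 capability check on an NVIDIA GPU with PyTorch.
import torch
print(torch.cuda.is_available() and torch.cuda.is_bf16_supported())
```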
---
### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision**, but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.
📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.
---
### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, requires more memory.
📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy.
📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
---
### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.
- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
- **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
- **Trade-off**: Lower accuracy compared to higher-bit quantizations.
- **IQ3_S**: Small block size for **maximum memory efficiency**.
- **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
- **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.
- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
- **Use case**: Best for **low-memory devices** where **Q6_K** is too large.
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
- **Use case**: Best for **ARM-based devices** or **low-memory environments**.
---
### **Summary Table: Model Format Selection**
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|------------|---------------|----------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
---
## **Included Files & Details**
### `OpenMath-Nemotron-1.5B-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.
### `OpenMath-Nemotron-1.5B-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.
### `OpenMath-Nemotron-1.5B-bf16-q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.
### `OpenMath-Nemotron-1.5B-f16-q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.
### `OpenMath-Nemotron-1.5B-q4_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory.
### `OpenMath-Nemotron-1.5B-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.
### `OpenMath-Nemotron-1.5B-q6_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K**.
### `OpenMath-Nemotron-1.5B-q8_0.gguf`
- Fully **Q8** quantized model for better accuracy.
- Requires **more memory** but offers higher precision.
### `OpenMath-Nemotron-1.5B-iq3_xs.gguf`
- **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
- Best for **ultra-low-memory devices**.
### `OpenMath-Nemotron-1.5B-iq3_m.gguf`
- **IQ3_M** quantization, offering a **medium block size** for better accuracy.
- Suitable for **low-memory devices**.
### `OpenMath-Nemotron-1.5B-q4_0.gguf`
- Pure **Q4_0** quantization, optimized for **ARM devices**.
- Best for **low-memory environments**.
- Prefer IQ4_NL for better accuracy.
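As a quick sanity check, any of the files above can be run locally with llama.cpp's CLI. A minimal sketch (the `llama-cli` binary name assumes a recent llama.cpp build):
```bash
./llama-cli -m OpenMath-Nemotron-1.5B-q4_k.gguf \
  -p "Solve the following math problem. Put the answer inside \\boxed{}. What is the minimum value of a^2+6a-7?" \
  -n 512
```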
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
❤ **Please click "Like" if you find this useful!**
Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4o-mini)
- `HugLLM` (Hugging Face open-source)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4o-mini** for:
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on Hugging Face Inference API
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a Quantum Network Monitor Agent to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
# OpenMath-Nemotron-1.5B
OpenMath-Nemotron-1.5B was created by fine-tuning [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning) dataset.
This model is ready for commercial use.

OpenMath-Nemotron models achieve state-of-the-art results on popular mathematical benchmarks. We present metrics as pass@1 (maj@64) where pass@1
is an average accuracy across 64 generations and maj@64 is the result of majority voting.
Please see our [paper](https://arxiv.org/abs/2504.16891) for more details on the evaluation setup.
| Model | AIME24 | AIME25 | HMMT-24-25 | HLE-Math |
|-------------------------------|-----------------|-------|-------|-------------|
| DeepSeek-R1-Distill-Qwen-1.5B | 26.8 (60.0) | 21.4 (36.7) | 14.2 (26.5) | 2.9 (5.0) |
| [OpenMath-Nemotron-1.5B](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B) CoT | 61.6 (80.0) | 49.5 (66.7) | 39.9 (53.6) | 5.4 (5.4) |
| [OpenMath-Nemotron-1.5B](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B) TIR | 52.0 (83.3) | 39.7 (70.0) | 37.2 (60.7) | 2.5 (6.2) |
| + Self GenSelect | 83.3 | 70.0 | 62.2 | 7.9 |
| + 32B GenSelect | 83.3 | 70.0 | 62.8 | 8.3 |
| DeepSeek-R1-Distill-Qwen-7B | 54.4 (80.0) | 38.6 (53.3) | 30.6 (42.9) | 3.3 (5.2) |
| [OpenMath-Nemotron-7B](https://huggingface.co/nvidia/OpenMath-Nemotron-7B) CoT | 74.8 (80.0) | 61.2 (76.7) | 49.7 (57.7) | 6.6 (6.6) |
| [OpenMath-Nemotron-7B](https://huggingface.co/nvidia/OpenMath-Nemotron-7B) TIR | 72.9 (83.3) | 57.5 (76.7) | 54.6 (66.3) | 7.8 (10.8) |
| + Self GenSelect | 86.7 | 76.7 | 68.4 | 11.5 |
| + 32B GenSelect | 86.7 | 76.7 | 69.9 | 11.9 |
| DeepSeek-R1-Distill-Qwen-14B | 65.8 (80.0) | 48.4 (60.0) | 40.1 (52.0) | 4.2 (4.8) |
| [OpenMath-Nemotron-14B-MIX (kaggle)](https://huggingface.co/nvidia/OpenMath-Nemotron-14B-Kaggle) | 73.7 (86.7) | 57.9 (73.3) | 50.5 (64.8) | 5.7 (6.5) |
| [OpenMath-Nemotron-14B](https://huggingface.co/nvidia/OpenMath-Nemotron-14B) CoT | 76.3 (83.3) | 63.0 (76.7) | 52.1 (60.7) | 7.5 (7.6) |
| [OpenMath-Nemotron-14B](https://huggingface.co/nvidia/OpenMath-Nemotron-14B) TIR | 76.3 (86.7) | 61.3 (76.7) | 58.6 (70.9) | 9.5 (11.5) |
| + Self GenSelect | 86.7 | 76.7 | 72.4 | 14.1 |
| + 32B GenSelect | 90.0 | 76.7 | 71.9 | 13.7 |
| QwQ-32B | 78.1 (86.7) | 66.5 (76.7) | 55.9 (63.3) | 9.0 (9.5) |
| DeepSeek-R1-Distill-Qwen-32B | 66.9 (83.3) | 51.8 (73.3) | 39.9 (51.0) | 4.8 (6.0) |
| [OpenMath-Nemotron-32B](https://huggingface.co/nvidia/OpenMath-Nemotron-32B) CoT | 76.5 (86.7) | 62.5 (73.3) | 53.0 (59.2) | 8.3 (8.3) |
| [OpenMath-Nemotron-32B](https://huggingface.co/nvidia/OpenMath-Nemotron-32B) TIR | 78.4 (93.3) | 64.2 (76.7) | 59.7 (70.9) | 9.2 (12.5) |
| + Self GenSelect | 93.3 | 80.0 | 73.5 | 15.7 |
| DeepSeek-R1 | 79.1 (86.7) | 64.3 (73.3) | 53.0 (59.2) | 10.5 (11.4) |
We used [a version of OpenMath-Nemotron-14B](https://huggingface.co/nvidia/OpenMath-Nemotron-14B-Kaggle) model to secure
the first place in [AIMO-2 Kaggle competition](https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2/leaderboard)!
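To make the pass@1 (maj@64) numbers above concrete, here is a minimal sketch of how both metrics could be computed from 64 sampled final answers to a single problem (function names are illustrative, not taken from the NeMo-Skills evaluation code):
```python
from collections import Counter

def pass_at_1(answers, reference):
    # Average accuracy across all sampled generations
    return sum(a == reference for a in answers) / len(answers)

def maj_at_k(answers, reference):
    # Accuracy of the single majority-vote answer
    majority, _ = Counter(answers).most_common(1)[0]
    return float(majority == reference)

answers = ["-16"] * 40 + ["-7"] * 24  # 64 sampled final answers
print(pass_at_1(answers, "-16"))  # 0.625
print(maj_at_k(answers, "-16"))   # 1.0
```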
## Reproducing our results
The pipeline we used to produce the data and models is fully open-sourced!
- [Code](https://github.com/NVIDIA/NeMo-Skills)
- [Models](https://huggingface.co/collections/nvidia/openmathreasoning-68072c0154a5099573d2e730)
- [Dataset](https://huggingface.co/datasets/nvidia/OpenMathReasoning)
- [Paper](https://arxiv.org/abs/2504.16891)
We provide [all instructions](https://nvidia.github.io/NeMo-Skills/openmathreasoning1/)
to fully reproduce our results, including data generation.
## How to use the models?
Our models can be used in 3 inference modes: chain-of-thought (CoT), tool-integrated reasoning (TIR) and generative solution selection (GenSelect).
To run inference with CoT mode, you can use this example code snippet.
```python
import transformers
import torch
model_id = "nvidia/OpenMath-Nemotron-1.5B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
    {
        "role": "user",
        "content": "Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{}.\n" +
            "What is the minimum value of $a^2+6a-7$?",
    },
]
outputs = pipeline(
messages,
max_new_tokens=4096,
)
print(outputs[0]["generated_text"][-1]['content'])
```
To run inference with TIR or GenSelect modes, we highly recommend using our
[reference implementation in NeMo-Skills](https://nvidia.github.io/NeMo-Skills/openmathreasoning1/evaluation/).
Please note that these models have not been instruction tuned on general data and thus might not provide good answers outside of math domain.
## Citation
If you find our work useful, please consider citing us!
```bibtex
@article{moshkov2025aimo2,
title = {AIMO-2 Winning Solution: Building State-of-the-Art Mathematical Reasoning Models with OpenMathReasoning dataset},
author = {Ivan Moshkov and Darragh Hanley and Ivan Sorokin and Shubham Toshniwal and Christof Henkel and Benedikt Schifferer and Wei Du and Igor Gitman},
year = {2025},
journal = {arXiv preprint arXiv:2504.16891}
}
```
## Additional information
### License/Terms of Use: <br>
GOVERNING TERMS: Use of this model is governed by [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode.en).
Additional Information: [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B/blob/main/LICENSE).
### Deployment Geography:
Global <br>
### Use Case: <br>
This model is intended to facilitate research in the area of mathematical reasoning.
### Release Date: <br>
Huggingface 04/23/2025 <br>
### Model Architecture: <br>
**Architecture Type:** Transformer decoder-only language model <br>
**Network Architecture:** Qwen2.5 <br>
**This model was developed based on Qwen2.5-1.5B.** <br>
**This model has 1.5B model parameters.** <br>
### Input: <br>
**Input Type(s):** Text <br>
**Input Format(s):** String <br>
**Input Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Input:** Context length up to 131,072 tokens <br>
### Output: <br>
**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Output:** Context length up to 131,072 tokens <br>
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
### Software Integration : <br>
**Runtime Engine(s):** <br>
* Tensor RT / Triton <br>
**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Hopper <br>
**Preferred Operating System(s):** <br>
* Linux <br>
### Model Version(s):
[OpenMath-Nemotron-1.5B](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B)
[OpenMath-Nemotron-7B](https://huggingface.co/nvidia/OpenMath-Nemotron-7B)
[OpenMath-Nemotron-14B](https://huggingface.co/nvidia/OpenMath-Nemotron-14B)
[OpenMath-Nemotron-32B](https://huggingface.co/nvidia/OpenMath-Nemotron-32B)
# Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](./EXPLAINABILITY.md), [Bias](./BIAS.md), [Safety & Security](./SAFETY.md), and [Privacy](./PRIVACY.md) Subcards.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
|
Mungert/Qwen3-0.6B-abliterated-GGUF
|
Mungert
| 2025-08-10T02:13:08Z | 898 | 9 |
transformers
|
[
"transformers",
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T00:56:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DoppelReflEx/test-24
|
DoppelReflEx
| 2025-08-10T02:09:00Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Aratako/Mistral-Small-3.1-24B-RP",
"base_model:merge:Aratako/Mistral-Small-3.1-24B-RP",
"base_model:Gryphe/Codex-24B-Small-3.2",
"base_model:merge:Gryphe/Codex-24B-Small-3.2",
"base_model:ZeroAgency/Zero-Mistral-24B",
"base_model:merge:ZeroAgency/Zero-Mistral-24B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-08T10:23:45Z |
---
base_model:
- ZeroAgency/Zero-Mistral-24B
- Aratako/Mistral-Small-3.1-24B-RP
- Gryphe/Codex-24B-Small-3.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [Gryphe/Codex-24B-Small-3.2](https://huggingface.co/Gryphe/Codex-24B-Small-3.2) as a base.
### Models Merged
The following models were included in the merge:
* [ZeroAgency/Zero-Mistral-24B](https://huggingface.co/ZeroAgency/Zero-Mistral-24B)
* [Aratako/Mistral-Small-3.1-24B-RP](https://huggingface.co/Aratako/Mistral-Small-3.1-24B-RP)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Gryphe/Codex-24B-Small-3.2
parameters:
weight: 0.6
density: 1
- model: ZeroAgency/Zero-Mistral-24B
parameters:
weight: 0.2
density: 0.6
- model: Aratako/Mistral-Small-3.1-24B-RP
parameters:
weight: 0.2
density: 0.6
merge_method: dare_ties
base_model: Gryphe/Codex-24B-Small-3.2
tokenizer_source: base
parameters:
rescale: true
dtype: bfloat16
```
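Assuming the YAML above is saved as `config.yaml`, a merge like this one can typically be reproduced with mergekit's CLI (the output path is illustrative):
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```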
|
Mungert/all-MiniLM-L6-v2-GGUF
|
Mungert
| 2025-08-10T02:00:16Z | 1,753 | 4 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-03-23T03:05:06Z |
---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# <span style="color: #7FFF7F;">all-MiniLM-L6-v2 GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`adef8178`](https://github.com/ggerganov/llama.cpp/commit/adef81781a15083f218eae6c488b95cdad781971).
---
## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
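For illustration, a hypothetical invocation of this override during quantization might look like the following (tensor-name patterns and exact flag syntax vary between llama.cpp versions, so treat this as a sketch rather than a verified command):
```bash
# Bump embedding and output tensors to Q8_0 while quantizing the rest to Q4_K_M
./llama-quantize --tensor-type token_embd.weight=q8_0 \
                 --tensor-type output.weight=q8_0 \
                 model-f16.gguf model-q4_k_m.gguf Q4_K_M
```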
### **I'd love your feedback—have you tried this? How does it perform for you?**
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which one, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
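If you need a different limit, the truncation length can be changed on the loaded model. A small sketch using the sentence-transformers API from the usage section above:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
print(model.max_seq_length)  # 256 word pieces by default
model.max_seq_length = 128   # shorter inputs are truncated sooner, encoding is faster
```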
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
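A minimal sketch of this in-batch objective (a simplification of the actual training script mentioned below; the scale factor is an illustrative choice):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a, emb_b, scale=20.0):
    # emb_a[i] and emb_b[i] are the embeddings of the i-th positive pair
    a = F.normalize(emb_a, p=2, dim=1)
    b = F.normalize(emb_b, p=2, dim=1)
    logits = scale * a @ b.T                        # all pairwise cosine similarities
    labels = torch.arange(len(a), device=a.device)  # the true pair sits on the diagonal
    return F.cross_entropy(logits, labels)

loss = in_batch_contrastive_loss(torch.randn(8, 384), torch.randn(8, 384))
print(loss)
```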
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is available in this repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open-source code for the Quantum Network Monitor service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, at [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). Not token-limited, since the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754790900
|
afasdfdfadsf
| 2025-08-10T01:57:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T01:56:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mungert/OlympicCoder-7B-GGUF
|
Mungert
| 2025-08-10T01:51:26Z | 471 | 4 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"en",
"dataset:open-r1/codeforces-cots",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-03-16T02:40:47Z |
---
license: apache-2.0
datasets:
- open-r1/codeforces-cots
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
---
# <span style="color: #7FFF7F;">OlympicCoder-7B GGUF Models</span>
## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span>
Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
### **Benchmark Context**
All tests conducted on **Llama-3-8B-Instruct** using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations
### **Method**
- **Dynamic Precision Allocation**:
- First/Last 25% of layers → IQ4_XS (selected layers)
- Middle 50% → IQ2_XXS/IQ3_S (increase efficiency)
- **Critical Component Protection**:
- Embeddings/output layers use Q5_K
- Reduces error propagation by 38% vs standard 1-2bit
### **Quantization Performance Comparison (Llama-3-8B)**
| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
**Key**:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU avx2, 2048 token context)
- Size differences reflect mixed quantization overhead
**Key Improvements:**
- 🔥 **IQ1_M** shows massive 43.9% perplexity reduction (27.46 → 15.41)
- 🚀 **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB
- ⚡ **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization
**Tradeoffs:**
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)
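A toy sketch of the layer-allocation rule described under Method (the function and thresholds here are illustrative; the real logic lives in the quantization tooling, not in this snippet):
```python
def assign_quant_type(tensor_name: str, layer_idx: int, n_layers: int) -> str:
    # Protect critical components regardless of depth
    if tensor_name in ("token_embd", "output"):
        return "Q5_K"
    frac = layer_idx / max(n_layers - 1, 1)
    if frac < 0.25 or frac >= 0.75:
        return "IQ4_XS"   # first/last quarter of layers: higher precision
    return "IQ2_XXS"      # middle half: maximum compression

print([assign_quant_type("blk", i, 32) for i in range(32)])
```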
### **When to Use These Models**
📌 **Fitting models into GPU VRAM**
✔ **Memory-constrained deployments**
✔ **CPU and Edge Devices** where 1-2 bit errors can be tolerated
✔ **Research** into ultra-low-bit quantization
## **Choosing the Right Model Format**
Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.
### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides a **similar dynamic range** to FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32.
📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.
📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
---
### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision** but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.
📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.
---
### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, requires more memory.
📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy.
📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
---
### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.
- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
- **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
- **Trade-off**: Lower accuracy compared to higher-bit quantizations.
- **IQ3_S**: Small block size for **maximum memory efficiency**.
- **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
- **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.
- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
- **Use case**: Best for **low-memory devices** where **Q6_K** is too large.
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
- **Use case**: Best for **ARM-based devices** or **low-memory environments**.
---
### **Summary Table: Model Format Selection**
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|------------|---------------|----------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
---
## **Included Files & Details**
### `OlympicCoder-7B-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.
### `OlympicCoder-7B-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.
### `OlympicCoder-7B-bf16-q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.
### `OlympicCoder-7B-f16-q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.
### `OlympicCoder-7B-q4_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory.
### `OlympicCoder-7B-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.
### `OlympicCoder-7B-q6_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K**.
### `OlympicCoder-7B-q8_0.gguf`
- Fully **Q8** quantized model for better accuracy.
- Requires **more memory** but offers higher precision.
### `OlympicCoder-7B-iq3_xs.gguf`
- **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
- Best for **ultra-low-memory devices**.
### `OlympicCoder-7B-iq3_m.gguf`
- **IQ3_M** quantization, offering a **medium block size** for better accuracy.
- Suitable for **low-memory devices**.
### `OlympicCoder-7B-q4_0.gguf`
- Pure **Q4_0** quantization, optimized for **ARM devices**.
- Best for **low-memory environments**.
- Prefer IQ4_NL for better accuracy.
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
❤ **Please click "Like" if you find this useful!**
Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com)
💬 **How to test**:
1. Click the **chat icon** (bottom right on any page)
2. Choose an **AI assistant type**:
- `TurboLLM` (GPT-4-mini)
- `FreeLLM` (Open-source)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap scans**
- **Quantum-readiness checks**
- **Metasploit integration**
🟡 **TestLLM** – Current experimental model (llama.cpp on 6 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4-mini** for:
- **Real-time network diagnostics**
- **Automated penetration testing** (Nmap/Metasploit)
- 🔑 Get more tokens by [downloading our Quantum Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
🔵 **HugLLM** – Open-source models (≈8B params):
- **2x more tokens** than TurboLLM
- **AI-powered log analysis**
- 🌐 Runs on Hugging Face Inference API
### 💡 **Example AI Commands to Test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encryption for communication"`
3. `"Run a quick Nmap vulnerability test"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a Quantum Network Monitor Agent to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final word
I fund the servers used to create the model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all from my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva).
This will help me pay for the services and increase the token limits for everyone.
Thank you :)
# Model Card for OlympicCoder-7B
OlympicCoder-7B is a code model that achieves strong performance on competitive coding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics.
* Repository: https://github.com/huggingface/open-r1
* Blog post: https://huggingface.co/blog/open-r1/update-3
## Model description
- **Model type:** A 7B parameter model fine-tuned on a decontaminated version of the codeforces dataset.
- **Language(s) (NLP):** Primarily English
- **License:** apache-2.0
- **Finetuned from model:** [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)
## Evaluation
We compare the performance of OlympicCoder models on two main benchmarks for competitive coding:
* **[IOI'2024:](https://github.com/huggingface/ioi)** 6 very challenging problems from the 2024 International Olympiad in Informatics. Models are allowed up to 50 submissions per problem.
* **[LiveCodeBench:](https://livecodebench.github.io)** Python programming problems sourced from platforms like CodeForces and LeetCode. We use the `v4_v5` subset of [`livecodebench/code_generation_lite`](https://huggingface.co/datasets/livecodebench/code_generation_lite), which corresponds to 268 problems. We use `lighteval` to evaluate models on LiveCodeBench using the sampling parameters described [here](https://github.com/huggingface/open-r1?tab=readme-ov-file#livecodebench).
> [!NOTE]
> The OlympicCoder models were post-trained exclusively on C++ solutions generated by DeepSeek-R1. As a result, performance on LiveCodeBench should be considered partially _out-of-domain_, since this benchmark expects models to output solutions in Python.
### IOI'24

### LiveCodeBench

## Usage
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="open-r1/OlympicCoder-7B", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=8000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
#<|im_start|>user
#Write a python program to calculate the 10th fibonacci number<|im_end|>
#<|im_start|>assistant
#<think>Okay, I need to write a Python program that calculates the 10th Fibonacci number. Hmm, the Fibonacci sequence starts with 0 and 1. Each subsequent number is the sum of the two preceding ones. So the sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. ...
```
> [!WARNING]
> To ensure that the model consistently outputs a long chain-of-thought, we have edited the chat template to prefill the first assistant turn with a `<think>` token. As a result, the outputs from this model will not show the opening `<think>` token if you use the model's `generate()` method. To apply reinforcement learning with a format reward, either prepend the `<think>` token to the model's completions or amend the chat template to remove the prefill.
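For example, when computing a format reward during RL, the missing prefill can be restored with a one-line fix (a sketch; the variable and sample completion are illustrative):
```python
completions = ["Okay, plan the solution step by step...</think>\nprint(55)"]
# The chat template prefills <think>, so generate() outputs start mid-thought.
# Prepend the tag so a format check sees a complete <think>...</think> block.
completions = ["<think>" + c for c in completions]
print(completions[0].startswith("<think>"))  # True
```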
## Training procedure
### Training hyper-parameters
The following hyperparameters were used during training:
- dataset: open-r1/codeforces-cots
- learning_rate: 4.0e-5
- train_batch_size: 2
- seed: 42
- packing: false
- distributed_type: deepspeed-zero-3
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_min_lr
- min_lr_rate: 0.1
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0
|
John6666/beyond-experimental-8-alpha-sdxl
|
John6666
| 2025-08-10T01:08:13Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"realism",
"anime",
"cartoon",
"styles",
"SDXL Turbo",
"en",
"base_model:OperationNova/BeyondSDXL",
"base_model:finetune:OperationNova/BeyondSDXL",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-10T00:59:18Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- realism
- anime
- cartoon
- styles
- SDXL Turbo
base_model: OperationNova/BeyondSDXL
---
Original model is [here](https://civitai.com/models/424895/beyond-experimental?modelVersionId=2095677).
The author is [here](https://huggingface.co/OperationNova).
This model was created by [OperationNova](https://civitai.com/user/OperationNova).
|
ajjyy/Qwen2-0.5B-PPO-gsm8k-attempt3
|
ajjyy
| 2025-08-09T23:29:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ppo",
"trl",
"arxiv:1909.08593",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T19:31:26Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: transformers
model_name: Qwen2-0.5B-PPO-gsm8k-attempt3
tags:
- generated_from_trainer
- ppo
- trl
licence: license
---
# Model Card for Qwen2-0.5B-PPO-gsm8k-attempt3
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ajjyy/Qwen2-0.5B-PPO-gsm8k-attempt3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ajyang-massachusetts-institute-of-technology/gsm8k_ppo_longer/runs/w1kt7u10)
This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593).
### Framework versions
- TRL: 0.20.0.dev0
- Transformers: 4.53.3
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite PPO as:
```bibtex
@article{mziegler2019fine-tuning,
title = {{Fine-Tuning Language Models from Human Preferences}},
author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving},
year = 2019,
eprint = {arXiv:1909.08593}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
uname0x96/blockassist-bc-rough_scavenging_narwhal_1754781544
|
uname0x96
| 2025-08-09T23:20:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough scavenging narwhal",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T23:20:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough scavenging narwhal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mohammad-shirkhani/Qwen2.5-1.5B-GRPO-rating-1000
|
mohammad-shirkhani
| 2025-08-09T22:48:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T20:29:23Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-GRPO-rating-1000
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-GRPO-rating-1000
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mohammad-shirkhani/Qwen2.5-1.5B-GRPO-rating-1000", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.47.1
- Pytorch: 2.6.0+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1754777227
|
zenqqq
| 2025-08-09T22:21:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless reptilian caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T22:21:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless reptilian caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wvnvwn/full_llama3_8b_sft_offncpo
|
wvnvwn
| 2025-08-09T21:15:52Z | 2 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-08-09T21:12:06Z |
---
license: apache-2.0
---
|
nightmedia/Hunyuan-0.5B-Instruct-dwq4-mlx
|
nightmedia
| 2025-08-09T20:42:02Z | 21 | 0 |
mlx
|
[
"mlx",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-0.5B-Instruct",
"base_model:quantized:tencent/Hunyuan-0.5B-Instruct",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-09T20:09:33Z |
---
base_model: tencent/Hunyuan-0.5B-Instruct
library_name: mlx
pipeline_tag: text-generation
tags:
- mlx
---
# Hunyuan-0.5B-Instruct-dwq4-mlx
This model comes in Strarbery flavor:
```bash
How many R are in Strawberry
Hunyuan-0.5B-Instruct-dwq4-mlx
<think>
Okay, let's see. The user is asking how many "R" are in "Strawberry." First, I need to remember the spelling of "Strawberry." Let me recall: "Strawberry" is spelled S-T-R-A-R-B-E-R-Y. Let's break it down letter by letter.
Starting with the first letter: S. Then T. Next is R. Then A. Then R again. Then B. Then E. Then R. Then Y.
So let's list them out: S, T, R, A, R, B, E, R, Y. Now, I need to count the "R" characters. Let's go through each letter:
1. S (no)
2. T (no)
3. R (yes, count 1)
4. A (no)
5. R (yes, count 2)
6. B (no)
7. E (no)
8. R (yes, count 3)
9. Y (no)
So that's three "R"s. Let me double-check to make sure I didn't miss any. "Strawberry" is S-T-R-A-R-B-E-R-Y.
Let's spell it again
```
Right, for the wrong reason
538.516 tok/sec
This model [Hunyuan-0.5B-Instruct-dwq4-mlx](https://huggingface.co/nightmedia/Hunyuan-0.5B-Instruct-dwq4-mlx) was
converted to MLX format from [tencent/Hunyuan-0.5B-Instruct](https://huggingface.co/tencent/Hunyuan-0.5B-Instruct)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Hunyuan-0.5B-Instruct-dwq4-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ecamli/blockassist-bc-hulking_soft_hippo_1754771954
|
ecamli
| 2025-08-09T20:39:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T20:39:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mauroicardi43/blockassist-bc-giant_agile_crab_1754770930
|
Mauroicardi43
| 2025-08-09T20:23:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"giant agile crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T20:22:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- giant agile crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ljnlonoljpiljm/siglip2-base-patch16-256-crop-aesthetics-4
|
ljnlonoljpiljm
| 2025-08-09T20:04:21Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-09T20:04:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
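In the absence of official usage instructions, here is a minimal, untested sketch, assuming this checkpoint loads with the standard `transformers` Auto classes for image classification (the image path is a placeholder, and the label names come from whatever is stored in the model config):
```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "ljnlonoljpiljm/siglip2-base-patch16-256-crop-aesthetics-4"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder path to any local image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
idx = logits.argmax(-1).item()
print(model.config.id2label[idx])
```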
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF
|
mradermacher
| 2025-08-09T20:00:05Z | 3,596 | 0 |
transformers
|
[
"transformers",
"gguf",
"roleplay",
"conversational",
"en",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:nvidia/HelpSteer3",
"dataset:zerofata/Roleplay-Anime-Characters",
"dataset:zerofata/Instruct-Anime-CreativeWriting",
"base_model:p-e-r-e-g-r-i-n-e/Q3-Nilcall-32B-v0.1",
"base_model:quantized:p-e-r-e-g-r-i-n-e/Q3-Nilcall-32B-v0.1",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-08-09T16:39:00Z |
---
base_model: p-e-r-e-g-r-i-n-e/Q3-Nilcall-32B-v0.1
datasets:
- allenai/tulu-3-sft-personas-instruction-following
- nvidia/HelpSteer3
- zerofata/Roleplay-Anime-Characters
- zerofata/Instruct-Anime-CreativeWriting
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- roleplay
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/p-e-r-e-g-r-i-n-e/Q3-Nilcall-32B-v0.1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Q3-Nilcall-32B-v0.1-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
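For example, here is a minimal sketch of fetching one of the single-file quants listed in the table below and loading it with `llama-cpp-python` (the filename is taken from the table; any other single-file entry works the same way):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (multi-part quants must be concatenated first).
path = hf_hub_download(
    repo_id="mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF",
    filename="Q3-Nilcall-32B-v0.1.i1-Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Write a one-line greeting.", max_tokens=32)["choices"][0]["text"])
```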
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 8.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-Nilcall-32B-v0.1-i1-GGUF/resolve/main/Q3-Nilcall-32B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
scsocial/marketflow
|
scsocial
| 2025-08-09T17:20:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-09T17:17:37Z |
Marketing SaaS Scaffold
=======================
What you have:
- server/ : Express-based OpenAI proxy (server.js)
- client/ : Vite + React simple client (src/App.jsx)
- Dockerfile + docker-compose.yml : quick dev container for server
- This scaffold is minimal and meant for local testing & extension.
Quick start (local, requires Node 18+)
-------------------------------------
1) Server-only (recommended for testing quickly):
cd server
npm install
# set your OPENAI_API_KEY environment variable first, e.g.:
# export OPENAI_API_KEY="sk-..."
npm start
The server will be available at http://localhost:5174
You can then run the client dev server:
2) Client:
cd client
npm install
npm run dev
Open the printed Vite URL (http://localhost:5173 by default).
Note: For the client to call the server proxy in dev, you may need to set up a proxy in the Vite config,
or put the server in front of the client as a reverse proxy. Alternatively, build the client and serve
the static files via the server.
3) Docker (server)
docker-compose up --build
# set OPENAI_API_KEY in your environment when running compose:
OPENAI_API_KEY=sk-... docker-compose up --build
Next steps:
- Add authentication (Clerk/Auth0) and persistent DB (Postgres + Prisma).
- Add S3 storage for assets.
- Replace simple client with the 3D UI we built earlier (copy the client code).
- Harden rate limits, add logging, monitoring and billing integrations.
|
ImparkTeam/LLama-8b-8math-tutor-GGUF
|
ImparkTeam
| 2025-08-09T17:17:50Z | 500 | 0 |
llama-cpp
|
[
"llama-cpp",
"safetensors",
"gguf",
"llama",
"llama-3",
"unsloth",
"turkish",
"mathematics",
"8th-grade",
"imparkteam",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T17:08:06Z |
---
license: apache-2.0
language:
- tr
library_name: llama-cpp
tags:
- llama-3
- unsloth
- turkish
- mathematics
- 8th-grade
- gguf
- imparkteam
---
# LLama-8b-8math-tutor-GGUF 🧮
This repository contains GGUF versions of the `ImparkTeam/LLama-8b-8math-tutor` LoRA adapters merged into the `ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1` base model.
These models are optimized for desktop applications designed to run **on CPU** and **offline**, without an internet connection.
## 📚 About the Project
This model is a conversational AI assistant designed to help 8th-grade students with mathematics. It was trained with a two-stage strategy:
1. **Language Adaptation:** A solid base model adapted to Turkish by the `ytu-ce-cosmos` team was used.
2. **Domain Adaptation:** On top of this base model, LoRA fine-tuning was performed on a 30,000-row 8th-grade mathematics dataset using `[TASK]`-tagged instruction formatting.
This approach lets the model keep its strong Turkish abilities while specializing in a specific domain.
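As a rough sketch of how such a LoRA setup might look with Unsloth (not the exact training script; the sequence length and 4-bit loading below are assumptions, while the LoRA values come from the hyperparameters table further down):
```python
from unsloth import FastLanguageModel

# Load the Turkish-adapted base model (4-bit loading and max_seq_length are assumptions).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters with the documented hyperparameters (r=16, alpha=32, dropout=0).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```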
## ⚙️ Model Versions
This repository offers several quantization levels for different needs:
| File Name | Size | Quality | Recommended RAM | Use Case |
|---|---|---|---|---|
| **`...-q4_k_m.gguf`** | ~4.9 GB | High quality | 8 GB+ | **Recommended.** The ideal size/performance balance for most desktop applications. |
| **`...-q5_k_m.gguf`** | ~5.7 GB | Very high quality | 16 GB+ | For those who prioritize quality. |
| **`...-q8_0.gguf`** | ~8.5 GB | Near lossless | 16 GB+ | Highest accuracy; requires more resources. |
| **`...-f16.gguf`** | ~16.1 GB | Full precision | 32 GB+ | Unquantized; for archiving or use on powerful GPUs. |
## 💻 Usage with `llama-cpp-python`
```python
from llama_cpp import Llama
# Load the model (q4_k_m is used as an example)
# n_gpu_layers=-1 tries to offload all layers to the GPU if one is available.
# If there is no GPU, the model runs entirely on the CPU.
llm = Llama(
model_path="./LLama-8b-8math-tutor-GGUF/LLama-8b-8math-tutor-q4_k_m.gguf",
n_ctx=2048,
n_gpu_layers=-1,
n_threads=4, # adjust to your CPU core count
chat_format="chatml"
)
# Build the chat history
messages = [
{"role": "system", "content": "Sen 8. sınıf seviyesinde, tarafsız ve destekleyici bir matematik öğretmenisin."},
{"role": "user", "content": "EBOB ve EKOK arasındaki fark nedir?"}
]
# Generate a response
response = llm.create_chat_completion(messages=messages)
print(response['choices'][0]['message']['content'])
```
## 📊 Training Details
### Hyperparameters
| Parameter | Value |
|---|---|
| **base_model** | `ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1` |
| **lora_r** | 16 |
| **lora_alpha** | 32 |
| **lora_dropout** | 0 |
| **target_modules** | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| **batch_size** | 8 |
| **num_train_epochs**| 2 |
| **learning_rate** | 2e-4 |
| **optim** | adamw_8bit |
### Results
- **Final Training Loss:** 0.7500
This model was trained using the [Unsloth](https://github.com/unslothai/unsloth) library. 🦥
|
rene-contango/test-model-output-qwen3
|
rene-contango
| 2025-08-09T17:12:08Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen3",
"axolotl",
"generated_from_trainer",
"base_model:samoline/da41701b-dc32-48ed-adf1-d0566c2d1a19",
"base_model:adapter:samoline/da41701b-dc32-48ed-adf1-d0566c2d1a19",
"region:us"
] | null | 2025-08-09T16:21:09Z |
---
library_name: peft
base_model: samoline/da41701b-dc32-48ed-adf1-d0566c2d1a19
tags:
- axolotl
- generated_from_trainer
model-index:
- name: test-model-output-qwen3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.11.0.dev0`
```yaml
adapter: lora
base_model: samoline/da41701b-dc32-48ed-adf1-d0566c2d1a19
bf16: auto
chat_template: llama3
dataloader_num_workers: 0
dataloader_pin_memory: false
dataset_prepared_path: null
datasets:
- data_files:
- 90698c09b8d26ae4_dataset.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: rene-contango/test-model-output-qwen3
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/90698c09b8d26ae4_dataset.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: test-job-qwen3-123
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: test-job-qwen3-123
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# test-model-output-qwen3
This model is a fine-tuned version of [samoline/da41701b-dc32-48ed-adf1-d0566c2d1a19](https://huggingface.co/samoline/da41701b-dc32-48ed-adf1-d0566c2d1a19) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0 | 0 | 0.6239 |
| 0.5783 | 0.0016 | 3 | 0.6240 |
| 0.7511 | 0.0031 | 6 | 0.6240 |
| 0.57 | 0.0047 | 9 | 0.6238 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.53.1
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
skyxyz/blockassist-bc-clawed_swift_ibis_1754743297
|
skyxyz
| 2025-08-09T12:42:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"clawed swift ibis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:42:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- clawed swift ibis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ESERCKR/blockassist-bc-scurrying_lanky_cassowary_1754741896
|
ESERCKR
| 2025-08-09T12:19:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scurrying lanky cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:19:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scurrying lanky cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
giladgd/gpt-oss-20b-GGUF
|
giladgd
| 2025-08-09T11:59:49Z | 151 | 0 |
node-llama-cpp
|
[
"node-llama-cpp",
"gguf",
"llama.cpp",
"text-generation",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T09:45:28Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: node-llama-cpp
tags:
- node-llama-cpp
- llama.cpp
quantized_by: giladgd
base_model: openai/gpt-oss-20b
---
# gpt-oss-20b-GGUF
> [!NOTE]
> Read [our guide](https://node-llama-cpp.withcat.ai/blog/v3.12-gpt-oss) on using `gpt-oss` to learn how to adjust its responses
<p align="center">
<img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg">
</p>
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory.
> [!NOTE]
> Refer to the [original model card](https://huggingface.co/openai/gpt-oss-20b) for more details on the model
# Quants
| Link | [URI](https://node-llama-cpp.withcat.ai/cli/pull) | Size |
|:-----|:--------------------------------------------------|-----:|
| [GGUF](https://huggingface.co/giladgd/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b.MXFP4.gguf) | `hf:giladgd/gpt-oss-20b-GGUF/gpt-oss-20b.MXFP4.gguf` | 12.1GB |
| [GGUF](https://huggingface.co/giladgd/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b.F16.gguf) | `hf:giladgd/gpt-oss-20b-GGUF/gpt-oss-20b.F16.gguf` | 13.8GB |
> [!TIP]
> Download a quant using `node-llama-cpp` ([more info](https://node-llama-cpp.withcat.ai/cli/pull)):
> ```bash
> npx -y node-llama-cpp pull <URI>
> ```
# Usage
## Use with [`node-llama-cpp`](https://node-llama-cpp.withcat.ai) (recommended)
### CLI
Chat with the model:
```bash
npx -y node-llama-cpp chat hf:giladgd/gpt-oss-20b-GGUF/gpt-oss-20b.MXFP4.gguf
```
> [!NOTE]
> Ensure that you have `node.js` installed first:
> ```bash
> brew install nodejs
> ```
### Code
Use it in your node.js project:
```bash
npm install node-llama-cpp
```
```typescript
import {getLlama, resolveModelFile, LlamaChatSession} from "node-llama-cpp";
const modelUri = "hf:giladgd/gpt-oss-20b-GGUF/gpt-oss-20b.MXFP4.gguf";
const llama = await getLlama();
const model = await llama.loadModel({
modelPath: await resolveModelFile(modelUri)
});
const context = await model.createContext();
const session = new LlamaChatSession({
contextSequence: context.getSequence()
});
const q1 = "Hi there, how are you?";
console.log("User: " + q1);
const a1 = await session.prompt(q1);
console.log("AI: " + a1);
```
> [!TIP]
> Read the [getting started guide](https://node-llama-cpp.withcat.ai/guide/) to quickly scaffold a new `node-llama-cpp` project
#### Customize inference options
Set [Harmony](https://cookbook.openai.com/articles/openai-harmony) options using [`HarmonyChatWrapper`](https://node-llama-cpp.withcat.ai/api/classes/HarmonyChatWrapper):
```typescript
import {
getLlama, resolveModelFile, LlamaChatSession, HarmonyChatWrapper,
defineChatSessionFunction
} from "node-llama-cpp";
const modelUri = "hf:giladgd/gpt-oss-20b-GGUF/gpt-oss-20b.MXFP4.gguf";
const llama = await getLlama();
const model = await llama.loadModel({
modelPath: await resolveModelFile(modelUri)
});
const context = await model.createContext();
const session = new LlamaChatSession({
contextSequence: context.getSequence(),
chatWrapper: new HarmonyChatWrapper({
modelIdentity: "You are ChatGPT, a large language model trained by OpenAI.",
reasoningEffort: "high"
})
});
const functions = {
getCurrentWeather: defineChatSessionFunction({
description: "Gets the current weather in the provided location.",
params: {
type: "object",
properties: {
location: {
type: "string",
description: "The city and state, e.g. San Francisco, CA"
},
format: {
enum: ["celsius", "fahrenheit"]
}
}
},
handler({location, format}) {
console.log(`Getting current weather for "${location}" in ${format}`);
return {
// simulate a weather API response
temperature: format === "celsius" ? 20 : 68,
format
};
}
})
};
const q1 = "What is the weather like in SF?";
console.log("User: " + q1);
const a1 = await session.prompt(q1, {functions});
console.log("AI: " + a1);
```
## Use with [llama.cpp](https://github.com/ggml-org/llama.cpp)
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
### CLI
```bash
llama-cli --hf-repo giladgd/gpt-oss-20b-GGUF --hf-file gpt-oss-20b.MXFP4.gguf -p "The meaning to life and the universe is"
```
### Server
```bash
llama-server --hf-repo giladgd/gpt-oss-20b-GGUF --hf-file gpt-oss-20b.MXFP4.gguf -c 2048
```
|
Abbye/blockassist-bc-shaggy_small_octopus_1754736056
|
Abbye
| 2025-08-09T11:49:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shaggy small octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T10:52:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shaggy small octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lagon20ms/blockassist-bc-foxy_polished_ant_1754719599
|
lagon20ms
| 2025-08-09T06:07:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foxy polished ant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T06:07:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foxy polished ant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DevQuasar/yasserrmd.DentaInstruct-1.2B-GGUF
|
DevQuasar
| 2025-08-08T23:55:35Z | 545 | 0 | null |
[
"gguf",
"text-generation",
"base_model:yasserrmd/DentaInstruct-1.2B",
"base_model:quantized:yasserrmd/DentaInstruct-1.2B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-08T23:28:05Z |
---
base_model:
- yasserrmd/DentaInstruct-1.2B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [yasserrmd/DentaInstruct-1.2B](https://huggingface.co/yasserrmd/DentaInstruct-1.2B)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
Thireus/GLM-4.5-Air-THIREUS-Q2_K_R4-SPECIAL_SPLIT
|
Thireus
| 2025-08-08T23:03:40Z | 305 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-03T21:55:37Z |
---
license: mit
---
## ⚠️ Compatibility Notice
Support for the GLM-4.5 family was recently merged into both [**llama.cpp**](https://github.com/ggml-org/llama.cpp) and [**ik_llama.cpp**](https://github.com/ikawrakow/ik_llama.cpp), so you must update to their latest versions before using any GGUF files from this repo. Older GGUF files and older versions of either codebase will be incompatible.
- [**llama.cpp**](https://github.com/ggml-org/llama.cpp) — see merged PR [ggml-org/llama.cpp#14939](https://github.com/ggml-org/llama.cpp/pull/14939)
- [**ik_llama.cpp**](https://github.com/ikawrakow/ik_llama.cpp) — see merged PR [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668)
---
# GLM-4.5-Air
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-Air-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5-Air model (official repo: https://huggingface.co/zai-org/GLM-4.5-Air). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5-Air/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5-Air.ROOT-3.9070bpw-4.6903ppl.50GB-GGUF_5GB-GPU_45GB-CPU.a02563d_270bea7.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-server:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-server \
-m GLM-4.5-Air-THIREUS-BF16-SPECIAL_TENSOR-00001-of-00804.gguf \
-fa -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.([0-9]|1[0-9]|2[0-4])\.ffn_.*=CUDA0" \
-ot "blk\.(2[5-9]|3[0-9]|4[0-6])\.ffn_.*=CPU" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no open source flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how GLM-4.5-Air quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your VRAM/RAM target usage for optimum perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷‍♂️ Will I release baked dynamic quant GGUFs?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them, or rely on generic GGUF dynamic quants such as [unsloth](https://huggingface.co/unsloth)'s.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Note that recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can easily download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
Thireus/GLM-4.5-Air-THIREUS-IQ3_S-SPECIAL_SPLIT
|
Thireus
| 2025-08-08T22:56:08Z | 7 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-04T05:07:30Z |
---
license: mit
---
## ⚠️ Compatibility Notice
Support for the GLM-4.5 family was recently merged into both [**llama.cpp**](https://github.com/ggml-org/llama.cpp) and [**ik_llama.cpp**](https://github.com/ikawrakow/ik_llama.cpp), so you must update to their latest versions before using any GGUF files from this repo. Older GGUF files and older versions of either codebase will be incompatible.
- [**llama.cpp**](https://github.com/ggml-org/llama.cpp) — see merged PR [ggml-org/llama.cpp#14939](https://github.com/ggml-org/llama.cpp/pull/14939)
- [**ik_llama.cpp**](https://github.com/ikawrakow/ik_llama.cpp) — see merged PR [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668)
---
# GLM-4.5-Air
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-Air-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5-Air model (official repo: https://huggingface.co/zai-org/GLM-4.5-Air). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5-Air/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5-Air.ROOT-3.9070bpw-4.6903ppl.50GB-GGUF_5GB-GPU_45GB-CPU.a02563d_270bea7.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-server:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-server \
-m GLM-4.5-Air-THIREUS-BF16-SPECIAL_TENSOR-00001-of-00804.gguf \
-fa -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.([0-9]|1[0-9]|2[0-4])\.ffn_.*=CUDA0" \
-ot "blk\.(2[5-9]|3[0-9]|4[0-6])\.ffn_.*=CPU" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no open source flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how GLM-4.5-Air quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your VRAM/RAM target usage for optimum perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷‍♂️ Will I release baked dynamic quant GGUFs?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them, or rely on generic GGUF dynamic quants such as [unsloth](https://huggingface.co/unsloth)'s.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Note that recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can easily download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
YiYiXu/test_lcm
|
YiYiXu
| 2025-08-08T22:55:06Z | 0 | 0 |
diffusers
|
[
"diffusers",
"arxiv:1910.09700",
"region:us"
] | null | 2025-08-08T22:55:05Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
UzzyDizzy/ppo-SnowballTarget
|
UzzyDizzy
| 2025-08-07T14:07:49Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-08-07T14:07:39Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: UzzyDizzy/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
phospho-app/biodunch-ACT_BBOX-pick_ball-0r8aa
|
phospho-app
| 2025-08-07T13:53:39Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:phospho-app/pick_ball_bboxes",
"region:us"
] |
robotics
| 2025-08-07T13:27:51Z |
---
datasets: phospho-app/pick_ball_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/pick_ball_bboxes](https://huggingface.co/datasets/phospho-app/pick_ball_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
zhniu/nanoVLM
|
zhniu
| 2025-08-07T13:02:59Z | 0 | 0 |
nanovlm
|
[
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-08-07T12:40:19Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("zhniu/nanoVLM")
```
|
Bikeman/Qwen3-0.6B-Gensyn-Swarm-coiled_pouncing_beaver
|
Bikeman
| 2025-08-07T12:49:45Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am coiled_pouncing_beaver",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-06T21:50:45Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am coiled_pouncing_beaver
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Turn-Detector-Qwen3-0.6B-GGUF
|
mradermacher
| 2025-08-07T12:09:45Z | 1,605 | 0 |
transformers
|
[
"transformers",
"gguf",
"zh",
"en",
"base_model:doodod/Turn-Detector-Qwen3-0.6B",
"base_model:quantized:doodod/Turn-Detector-Qwen3-0.6B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T12:33:21Z |
---
base_model: doodod/Turn-Detector-Qwen3-0.6B
language:
- zh
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/doodod/Turn-Detector-Qwen3-0.6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Turn-Detector-Qwen3-0.6B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
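For a quick local smoke test, here is a minimal sketch using `huggingface_hub` plus [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) — one of several GGUF runtimes; the llama.cpp CLI or LM Studio work equally well. The filename matches the Q4_K_M row in the table below; swap in any other quant.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one quant file; Q4_K_M is the "fast, recommended" size in the table below.
path = hf_hub_download(
    repo_id="mradermacher/Turn-Detector-Qwen3-0.6B-GGUF",
    filename="Turn-Detector-Qwen3-0.6B.Q4_K_M.gguf",
)

llm = Llama(model_path=path)
out = llm("你好,", max_tokens=32)  # simple completion smoke test (base model is zh/en)
print(out["choices"][0]["text"])
```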
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen3-0.6B-GGUF/resolve/main/Turn-Detector-Qwen3-0.6B.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Junghyung/gemma-3-12b-it-Rude-LORA
|
Junghyung
| 2025-08-07T08:37:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T08:37:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jdyu/gemma-3-1b-pt-MED-Instruct
|
jdyu
| 2025-08-07T06:55:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T06:54:31Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zoeythanayot/gemma-1b-thai-lora-CoT
|
zoeythanayot
| 2025-08-07T06:33:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"medical",
"thai",
"chain-of-thought",
"reasoning",
"healthcare",
"peft",
"lora",
"conversational",
"text-generation",
"th",
"dataset:RJTPP/medical-o1-reasoning-SFT-TH",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"license:gemma",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T15:41:59Z |
---
language:
- th
license: gemma
library_name: transformers
tags:
- medical
- thai
- chain-of-thought
- reasoning
- healthcare
- peft
- lora
- conversational
base_model: google/gemma-3-1b-it
datasets:
- RJTPP/medical-o1-reasoning-SFT-TH
metrics:
- perplexity
model-index:
- name: Gemma-1B Thai Medical CoT
results:
- task:
type: text-generation
name: Text Generation
dataset:
type: RJTPP/medical-o1-reasoning-SFT-TH
name: Medical O1 Reasoning SFT Thai
metrics:
- type: perplexity
value:
name: Perplexity
pipeline_tag: text-generation
widget:
- example_title: "Medical Question - Diabetes"
text: "Question: โรคเบาหวานคืออะไร?\nComplex_CoT: โรคเบาหวาน (Diabetes mellitus) เป็นกลุ่มของโรคเมตาบอลิกที่มีลักษณะเฉพาะคือภาวะน้ำตาลในเลือดสูง (hyperglycemia) ซึ่งเกิดจากความผิดปกติของการหลั่งอินซูลิน\nResponse: "
- example_title: "Medical Question - Hypertension"
text: "Question: ความดันโลหิตสูงมีสาเหตุจากอะไร?\nComplex_CoT: ความดันโลหิตสูง (Hypertension) เป็นภาวะที่ความดันของเลือดในหลอดเลือดแดงสูงกว่าปกติ\nResponse: "
- example_title: "Medical Question - Heart Disease"
text: "Question: โรคหัวใจมีอาการอย่างไร?\nComplex_CoT: โรคหัวใจ (Heart Disease) มีหลายประเภท แต่ละประเภทจะมีอาการที่แตกต่างกัน\nResponse: "
inference:
parameters:
max_new_tokens: 256
do_sample: true
temperature: 0.7
top_p: 0.9
repetition_penalty: 1.1
---
# Gemma-1B Thai Medical CoT
## Model Description
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on a Thai medical reasoning dataset. It has been optimized to provide Chain-of-Thought (CoT) reasoning for medical questions in Thai.
## Model Details
- **Base Model**: google/gemma-3-1b-it
- **Model Size**: 1B parameters
- **Language**: Thai
- **Domain**: Medical
- **Training Method**: LoRA (Low-Rank Adaptation)
- **Dataset**: medical-o1-reasoning-SFT-TH (22,848 samples)
## Training Details
### Training Data
- **Dataset**: RJTPP/medical-o1-reasoning-SFT-TH
- **Format**: Question → Complex Chain-of-Thought → Response
- **Language**: Thai
- **Domain**: Medical and healthcare
### Training Configuration
- **Method**: LoRA fine-tuning
- **LoRA Rank**: 32
- **LoRA Alpha**: 64
- **Learning Rate**: 3e-4
- **Batch Size**: 1 (with gradient accumulation)
- **Optimizer**: paged_adamw_8bit
- **Training Steps**: 1000
- **Scheduler**: Cosine with warmup
### Training Infrastructure
- **GPU**: L4 (Google Colab)
- **Memory Optimization**:
- Gradient checkpointing
- 4-bit quantization
- LoRA adapter training
## Usage
### Loading the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
# Load base model
base_model_id = "google/gemma-3-1b-it"
base_model = AutoModelForCausalLM.from_pretrained(
base_model_id,
device_map="auto",
torch_dtype=torch.float16
)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "zoeythanayot/gemma-1b-thai-lora-CoT")
# Merge adapter weights (optional)
model = model.merge_and_unload()
```
### Inference Example
```python
# Example medical question in Thai
input_text = """Question: โรคเบาหวานคืออะไร?
Complex_CoT: โรคเบาหวาน (Diabetes mellitus) เป็นกลุ่มของโรคเมตาบอลิกที่มีลักษณะเฉพาะคือภาวะน้ำตาลในเลือดสูง (hyperglycemia) ซึ่งเกิดจากความผิดปกติของการหลั่งอินซูลิน (insulin secretion), การทำงานของอินซูลิน (insulin action) หรือทั้งสองอย่าง
Response: """
# Tokenize input
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)
# Generate response
with torch.no_grad():
outputs = model.generate(
**input_ids,
max_new_tokens=256,
do_sample=True,
temperature=0.7,
pad_token_id=tokenizer.eos_token_id
)
# Decode output
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Model Performance
The model demonstrates strong performance in:
- **Thai Medical Question Answering**: Provides accurate medical information in Thai
- **Chain-of-Thought Reasoning**: Shows step-by-step medical reasoning process
- **Domain Knowledge**: Covers various medical topics and conditions
- **Language Quality**: Maintains natural Thai language flow
## Use Cases
- Medical education and training in Thai
- Healthcare chatbots and virtual assistants
- Medical information retrieval systems
- Clinical decision support tools
- Medical research and documentation
## Limitations
- **Language**: Optimized for Thai language only
- **Domain**: Focused on medical domain
- **Model Size**: 1B parameters may limit complex reasoning compared to larger models
- **Training Data**: Performance depends on the quality and coverage of training data
- **Medical Advice**: This model is for educational purposes only and should not replace professional medical consultation
## Ethical Considerations
- **Medical Accuracy**: While trained on medical data, outputs should be verified by medical professionals
- **Liability**: Users are responsible for validating medical information before application
- **Professional Consultation**: This model does not replace qualified medical advice
- **Bias**: May reflect biases present in training data
## Citation
If you use this model in your research or applications, please cite:
```bibtex
@misc{gemma1b-thai-medical-cot,
title={Gemma-1B Thai Medical Chain-of-Thought},
author={Your Name},
year={2024},
publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/zoeythanayot/gemma-1b-thai-lora-CoT}}
}
```
## License
This model is licensed under the same terms as the base Gemma model. Please refer to the [Gemma License](https://ai.google.dev/gemma/terms) for details.
## Contact
For questions or issues regarding this model, please open an issue in the repository or contact the model author.
---
**Disclaimer**: This model is for research and educational purposes only. Always consult qualified healthcare professionals for medical advice and decisions.
|
seraphimzzzz/399641
|
seraphimzzzz
| 2025-08-06T20:52:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T20:52:52Z |
[View on Civ Archive](https://civitaiarchive.com/models/430074?modelVersionId=479161)
|
seraphimzzzz/273438
|
seraphimzzzz
| 2025-08-06T19:37:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T19:37:40Z |
[View on Civ Archive](https://civitaiarchive.com/models/306369?modelVersionId=343905)
|
avcton/whisper-hallucination-detection
|
avcton
| 2025-08-06T13:09:24Z | 61 | 0 |
keras
|
[
"keras",
"tensorflow",
"whisper",
"hallucination",
"classification",
"en",
"ar",
"ur",
"hi",
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2025-08-06T12:09:56Z |
---
license: cc-by-nc-nd-4.0
library_name: keras
tags:
- tensorflow
- keras
- whisper
- hallucination
- classification
language:
- en
- ar
- ur
- hi
---
# Whisper Hallucination Detection via Confidence Scores
A TensorFlow model for detecting hallucinated segments in transcriptions using confidence scores. It supports custom loss masking and layer expansion.
> Whisper doesn't provide confidence scores out of the box. The trick here is to use the word-timestamp probabilities generated by either [Faster Whisper](https://github.com/SYSTRAN/faster-whisper) or [Whisper Timestamped](https://github.com/linto-ai/whisper-timestamped) as confidence scores.
## Custom Objects
- `ExpandDimsLayer`
- `masked_binary_crossentropy`
## Usage
```python
from huggingface_hub import snapshot_download
import tensorflow as tf
import sys, os, json
# 1. Download entire repo to local cache
local_repo_path = snapshot_download(repo_id="avcton/whisper-hallucination-detection")
# 2. Add the folder to sys.path for imports
sys.path.append(local_repo_path)
# 3. Import custom objects
from custom_objects import ExpandDimsLayer, masked_binary_crossentropy
from predict_and_decode_utils import predict_and_decode
# 4. Load config
with open(os.path.join(local_repo_path, "config.json")) as f:
cfg = json.load(f)
# 5. Load model
model = tf.keras.models.load_model(
os.path.join(local_repo_path, "model.keras"),
custom_objects={
"ExpandDimsLayer": ExpandDimsLayer,
"masked_binary_crossentropy": masked_binary_crossentropy
}
)
# 6. Call predictions
results = predict_and_decode(
model,
transcriptions=...,
max_segment_len=cfg["max_segment_len"],
max_score_len=cfg["max_score_len"],
binary_threshold=cfg["binary_threshold"],
label_mapping=cfg["label_mapping"]
)
```
## Documentation
Description for `predict_and_decode` function:
```
> predict_and_decode(model, transcriptions, max_segment_len, max_score_len, binary_threshold, label_mapping)
Makes predictions on a list of transcriptions using the trained model and decodes the results.
Args:
model: The trained Keras model.
transcriptions: A list of transcriptions, where each transcription is a list of score lists.
Example: [[[t1_seg1_score1, t1_seg1_score2], [t1_seg2_score1, t1_seg2_score2]], [[t2_seg1_score1, t2_seg1_score2], [t2_seg2_score1]]]
max_segment_len: The maximum number of segments in a transcription (used for padding).
max_score_len: The maximum number of scores in a segment (used for padding).
binary_threshold: The best threshold to convert probabilities to binary predictions.
label_mapping: The dictionary used to map labels ('H', 'NH') to integers (1, 0).
Returns:
A list of lists, where each inner list contains the predicted class labels
('H' or 'NH') for the segments in the corresponding input transcription. Padded transcriptions will not have a prediction.
Example: [['H', 'NH'], ['NH', 'H']]
```
## Limitations
- A transcription can contain at most **83** segments.
- A segment can contain at most **112** scores/words.
## Config
- Binary Threshold = **0.4187** (Found using Youden's J statistic)
- Label Mapping = {"H":1, "NH":0}
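A minimal sketch of how the `transcriptions` input can be built with Faster Whisper, using per-word probabilities as confidence scores; the model size and audio path are placeholders:
```python
from faster_whisper import WhisperModel

# Word-level timestamps expose a per-word probability that serves as a confidence score.
asr = WhisperModel("small")  # placeholder model size
segments, _info = asr.transcribe("audio.wav", word_timestamps=True)

# One transcription = list of segments; one segment = list of word confidence scores.
transcription = [[word.probability for word in segment.words] for segment in segments]
transcriptions = [transcription]  # batch of one, as expected by predict_and_decode
```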
|
Butanium/simple-stories-0L16H512D-attention-only-toy-transformer
|
Butanium
| 2025-08-06T12:15:56Z | 6 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-06T12:15:54Z |
# 0-Layer 16-Head Attention-Only Transformer
This is a simplified transformer model with 0 attention layer(s) and 16 attention head(s), hidden size 512, designed for studying attention mechanisms in isolation.
## Architecture Differences from Vanilla Transformer
**Removed Components:**
- **No MLP/Feed-Forward layers** - Only attention layers
- **No Layer Normalization** - No LayerNorm before/after attention
- **No positional encoding** - No position embeddings of any kind
**Kept Components:**
- Token embeddings
- Multi-head self-attention with causal masking
- Residual connections around attention layers
- Language modeling head (linear projection to vocabulary)
This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).
## Usage
```python
import torch.nn as nn
from transformers import PreTrainedModel, LlamaConfig

# Note: AttentionLayer (causal multi-head self-attention with a residual
# connection) is not included in this card; a hedged sketch follows this block.

class AttentionOnlyTransformer(PreTrainedModel):
"""Attention-only transformer with configurable number of attention layers."""
config_class = LlamaConfig
def __init__(self, config: LlamaConfig):
super().__init__(config)
self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])
self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
batch_size, seq_len = input_ids.shape
hidden_states = self.embed_tokens(input_ids)
assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
assert attention_mask.shape == (batch_size, seq_len)
for layer in self.layers:
hidden_states = layer(hidden_states, attention_mask)
assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
logits = self.lm_head(hidden_states)
assert logits.shape == (batch_size, seq_len, self.config.vocab_size)
loss = None
if labels is not None:
shift_logits = logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(
shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
)
return {"loss": loss, "logits": logits}
model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-0L16H512D-attention-only-toy-transformer')
```
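The card does not ship `AttentionLayer` itself. Below is a minimal sketch consistent with the architecture described above (causal multi-head self-attention, residual connection, no LayerNorm, no MLP, no positional encoding) — an assumption for illustration, not the authors' exact code:
```python
import math
import torch
import torch.nn as nn

class AttentionLayer(nn.Module):
    """Causal multi-head self-attention with a residual connection only."""

    def __init__(self, config):
        super().__init__()
        self.num_heads = config.num_attention_heads
        self.head_dim = config.hidden_size // self.num_heads
        self.q_proj = nn.Linear(config.hidden_size, config.hidden_size, bias=False)
        self.k_proj = nn.Linear(config.hidden_size, config.hidden_size, bias=False)
        self.v_proj = nn.Linear(config.hidden_size, config.hidden_size, bias=False)
        self.o_proj = nn.Linear(config.hidden_size, config.hidden_size, bias=False)

    def forward(self, hidden_states, attention_mask=None):
        b, t, d = hidden_states.shape

        def split(x):  # (b, t, d) -> (b, heads, t, head_dim)
            return x.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)

        q = split(self.q_proj(hidden_states))
        k = split(self.k_proj(hidden_states))
        v = split(self.v_proj(hidden_states))

        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)
        # Causal mask: each position may only attend to itself and earlier positions.
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=scores.device), diagonal=1)
        scores = scores.masked_fill(causal, float("-inf"))
        if attention_mask is not None:  # mask out padding keys
            scores = scores.masked_fill(attention_mask[:, None, None, :] == 0, float("-inf"))

        attn = scores.softmax(dim=-1) @ v            # (b, heads, t, head_dim)
        attn = attn.transpose(1, 2).reshape(b, t, d)  # merge heads
        return hidden_states + self.o_proj(attn)      # residual, no LayerNorm/MLP
```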
## Training Data
The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
|
Butanium/simple-stories-0L8H128D-attention-only-toy-transformer
|
Butanium
| 2025-08-06T12:11:00Z | 3 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-06T12:10:58Z |
# 0-Layer 8-Head Attention-Only Transformer
This is a simplified transformer model with 0 attention layer(s) and 8 attention head(s), hidden size 128, designed for studying attention mechanisms in isolation.
## Architecture Differences from Vanilla Transformer
**Removed Components:**
- **No MLP/Feed-Forward layers** - Only attention layers
- **No Layer Normalization** - No LayerNorm before/after attention
- **No positional encoding** - No position embeddings of any kind
**Kept Components:**
- Token embeddings
- Multi-head self-attention with causal masking
- Residual connections around attention layers
- Language modeling head (linear projection to vocabulary)
This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).
## Usage
```python
import torch.nn as nn
from transformers import PreTrainedModel, LlamaConfig

# AttentionLayer is not included in this card; see the hedged sketch under the
# companion 16-head card above (same architecture, different config).

class AttentionOnlyTransformer(PreTrainedModel):
"""Attention-only transformer with configurable number of attention layers."""
config_class = LlamaConfig
def __init__(self, config: LlamaConfig):
super().__init__(config)
self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])
self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
batch_size, seq_len = input_ids.shape
hidden_states = self.embed_tokens(input_ids)
assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
assert attention_mask.shape == (batch_size, seq_len)
for layer in self.layers:
hidden_states = layer(hidden_states, attention_mask)
assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
logits = self.lm_head(hidden_states)
assert logits.shape == (batch_size, seq_len, self.config.vocab_size)
loss = None
if labels is not None:
shift_logits = logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(
shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
)
return {"loss": loss, "logits": logits}
model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-0L8H128D-attention-only-toy-transformer')
```
## Training Data
The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
|
KennyOry/GlitchPunchAI
|
KennyOry
| 2025-08-06T09:49:27Z | 16 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T09:34:44Z |
---
license: apache-2.0
---
|
m0vie/klue-ner-koelectra
|
m0vie
| 2025-08-06T06:19:05Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"electra",
"token-classification",
"generated_from_trainer",
"base_model:monologg/koelectra-base-v3-discriminator",
"base_model:finetune:monologg/koelectra-base-v3-discriminator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-05T01:20:18Z |
---
library_name: transformers
license: apache-2.0
base_model: monologg/koelectra-base-v3-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-ner-koelectra
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-ner-koelectra
This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.54.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2
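A minimal inference sketch using the standard `transformers` pipeline; the label set and aggregation strategy are assumptions, since the card does not document them:
```python
from transformers import pipeline

# Token-classification pipeline; aggregation merges subword pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="m0vie/klue-ner-koelectra",
    aggregation_strategy="simple",  # assumption: model ships KLUE-style NER labels
)
print(ner("이순신은 조선 중기의 무신이다."))
```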
|
stablediffusionapi/juggernautxl
|
stablediffusionapi
| 2025-08-05T11:08:35Z | 10 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-05T11:05:40Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://cdn2.stablediffusionapi.com/generations/bf190b5a-fe19-437c-ba05-82f29cb1f7ad-0.png
---
# juggernautxl API Inference
<Gallery />
## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "juggernautxl"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/juggernautxl)
Model link: [View model](https://modelslab.com/models/juggernautxl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "juggernautxl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
yaobo2816/Qwen2.5-GRPO
|
yaobo2816
| 2025-06-25T02:44:26Z | 36 | 0 | null |
[
"gguf",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:LooksJuicy/ruozhiba",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T16:25:05Z |
---
license: mit
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-3B-Instruct
datasets:
- LooksJuicy/ruozhiba
---
The model was trained with GRPO and produces reasoning-style responses, similar to how DeepSeek R1 answers questions.
|
mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF
|
mradermacher
| 2025-06-22T15:15:52Z | 113 | 1 |
transformers
|
[
"transformers",
"gguf",
"moe",
"en",
"base_model:xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B",
"base_model:quantized:xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-05-05T23:05:57Z |
---
base_model: xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ1_M.gguf) | i1-IQ1_M | 6.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q2_K.gguf) | i1-Q2_K | 9.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 11.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ3_S.gguf) | i1-IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ3_M.gguf) | i1-IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 12.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q4_0.gguf) | i1-Q4_0 | 14.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.i1-Q6_K.gguf) | i1-Q6_K | 20.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|