
A fine-tune of google/gemma-3-27b-it using the antislop method described in this paper: https://arxiv.org/abs/2510.15061

The pipeline identifies the model's unique slop (over-represented words and phrases compared to human writing), generates a preference training set, and trains out the slop with our FTPO training algorithm.

https://github.com/sam-paech/auto-antislop
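
To illustrate the first step of the pipeline, here is a minimal sketch of the over-representation scoring idea: compare per-word frequencies in model outputs against a human-written reference corpus and rank words by how inflated they are. This is not the auto-antislop implementation; the function names, smoothing constant, and tokenisation are illustrative assumptions.

```python
from collections import Counter
import re


def word_counts(texts):
    # Lowercased word frequencies across a list of documents.
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts


def overrepresentation_scores(model_texts, human_texts, smoothing=1.0):
    # Ratio of a word's relative frequency in model outputs vs. human writing.
    # Words with a high ratio are candidate "slop" to target in the preference set.
    model_counts = word_counts(model_texts)
    human_counts = word_counts(human_texts)
    model_total = sum(model_counts.values())
    human_total = sum(human_counts.values())

    scores = {}
    for word, count in model_counts.items():
        model_freq = count / model_total
        human_freq = (human_counts.get(word, 0) + smoothing) / (human_total + smoothing)
        scores[word] = model_freq / human_freq

    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```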

This process makes the model's most common slop words & phrases much less frequent, with minimal impact on its overall capabilities.

It won't remove slop entirely. The technique only targets over-represented words & phrases, not stylistic or thematic slop.

This model should serve as a good base for further fine-tuning.
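
Below is a minimal usage sketch with transformers, assuming the standard text-generation flow applies to this checkpoint (gemma-3 checkpoints can also be loaded via their multimodal classes; the prompt and generation settings here are illustrative).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sam-paech/gemma-3-27b-it-antislop"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Write a short scene set in a rainy harbour town."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```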
