gemma-3-12b-it-norm-preserved-biprojected-abliterated

This model was derived from google/gemma-3-12b-it.

Projected abliteration was applied when determining the refusal direction, followed by a second round that removed the refusal direction's projected contribution onto the harmless direction at each layer targeted for intervention. Additionally, instead of subtracting the refusal direction in toto, only the directional component along the refusal direction was removed, preserving the norms of the layers subjected to intervention. The details of norm preservation can be found in the article on Norm-Preserving Biprojected Abliteration. The net result should further reduce model damage compared to prior attempts; no subsequent fine-tuning was applied to repair damage. This model refuses far less often than the original model, yet still retains awareness of safety and harms.
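The two steps above can be sketched roughly as follows. This is a minimal, hypothetical illustration with NumPy on plain arrays, not the actual implementation: function names, the row-wise treatment of weights, and the epsilon guard are all assumptions for the sketch.

```python
import numpy as np

def biproject_refusal(refusal, harmless):
    # Step 1 (sketch): remove the refusal direction's projected
    # contribution onto the harmless direction, then renormalize.
    harmless_unit = harmless / np.linalg.norm(harmless)
    refusal = refusal - np.dot(refusal, harmless_unit) * harmless_unit
    return refusal / np.linalg.norm(refusal)

def ablate_norm_preserving(W, refusal_dir):
    # Step 2 (sketch): remove only the directional component along the
    # refusal direction from each weight row, then rescale each row back
    # to its original norm so the layer's norms are preserved.
    r = refusal_dir / np.linalg.norm(refusal_dir)
    orig_norms = np.linalg.norm(W, axis=-1, keepdims=True)
    W_ablated = W - np.outer(W @ r, r)
    new_norms = np.linalg.norm(W_ablated, axis=-1, keepdims=True)
    return W_ablated * (orig_norms / np.maximum(new_norms, 1e-8))
```

Because each ablated row is rescaled by a positive scalar, it stays orthogonal to the refusal direction while recovering its original norm, which is the sense in which the intervention is norm-preserving.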

More details to follow.
