# Understanding and Enforcing Weight Disentanglement in Task Arithmetic
[CVPR 2026] Official checkpoints for the paper "Understanding and Enforcing Weight Disentanglement in Task Arithmetic".
## 🎯 Abstract
Task arithmetic provides an efficient, training-free way to edit pre-trained models, yet lacks a fundamental theoretical explanation for its success. The existing concept of "weight disentanglement" describes the ideal outcome of non-interfering task composition but does not reveal its underlying cause. Crucially, what intrinsic properties of the pre-trained model ($\theta_0$) or the task vectors ($\tau_t$) enable this disentanglement remains underexplored. In this paper, we introduce Task-Feature Specialization (TFS), a model's ability to allocate distinct internal features to different tasks, as the fundamental principle. We first prove that TFS is a sufficient condition for weight disentanglement. More importantly, we find that TFS also gives rise to an observable geometric consequence: weight vector orthogonality. This positions TFS as the common cause for both the desired functional outcome (disentanglement) and a measurable geometric property (orthogonality). This relationship provides the key insight for our method: since the abstract TFS property is intractable to enforce directly, we can instead promote weight disentanglement by shaping its concrete geometric consequence, orthogonality. We therefore propose OrthoReg, a simple and effective regularization method that actively enforces an internal orthogonal structure on the weight updates ($\Delta W$) that constitute $\tau_t$ during fine-tuning, and we prove theoretically that OrthoReg promotes disentanglement. Extensive experiments demonstrate that OrthoReg consistently and significantly enhances the performance of various task arithmetic methods.
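The "observable geometric consequence" mentioned above, weight vector orthogonality, can be measured directly: flatten two task vectors and check that their cosine similarity is near zero. This is an illustrative NumPy sketch, not code from the paper:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two flattened task vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Orthogonal task vectors -> cosine similarity ~ 0 (non-interfering composition).
tau_a = np.array([1.0, 0.0, 2.0, 0.0])
tau_b = np.array([0.0, 3.0, 0.0, 1.0])
assert abs(cosine(tau_a, tau_b)) < 1e-9
```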
## ✨ Key Contributions
- 📐 Theory: We identify TFS as a sufficient condition for weight disentanglement, and WVO as its geometric consequence, providing the first principled explanation for task arithmetic.
- 🔧 Method (OrthoReg): A simple regularization term added to the fine-tuning loss that enforces column-wise orthogonality on ΔW, for which we prove theoretical efficacy.
- 🔗 Connection to TTA: We show that OrthoReg and Tangent Task Arithmetic (TTA) share the same underlying mechanism (i.e., inter-task vector orthogonality), but OrthoReg achieves this more efficiently.
- 📊 Experiments: Consistent and significant improvements over Non-linear FT, TTA, ATT-FT, and LoRA-ATT across ViT-B-32, ViT-B-16, and ViT-L-14.
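The column-wise orthogonality penalty described above can be sketched as the squared Frobenius norm of the off-diagonal Gram entries of ΔW. This is a minimal NumPy sketch under that reading; the paper's exact loss may differ:

```python
import numpy as np

def ortho_penalty(delta_w: np.ndarray) -> float:
    """Penalty encouraging mutually orthogonal columns of delta_w."""
    gram = delta_w.T @ delta_w                # pairwise column inner products
    off_diag = gram - np.diag(np.diag(gram))  # zero out the diagonal (column norms)
    return float(np.sum(off_diag ** 2))       # squared Frobenius norm of off-diagonals

# A matrix with orthogonal columns incurs (numerically) zero penalty.
q, _ = np.linalg.qr(np.random.randn(8, 4))
assert ortho_penalty(q) < 1e-12
```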
## The OrthoReg Loss
The total loss adds a regularization term to the standard task objective:
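The exact objective appears in the paper; a plausible reconstruction from the description above (the standard task loss plus a λ-weighted penalty on the off-diagonal Gram entries of each layer's update $\Delta W_l$, enforcing column-wise orthogonality) is:

$$
\mathcal{L}_{\text{total}} \;=\; \mathcal{L}_{\text{task}} \;+\; \lambda \sum_{l} \left\| \Delta W_l^{\top} \Delta W_l \,-\, \operatorname{diag}\!\left(\Delta W_l^{\top} \Delta W_l\right) \right\|_F^2
$$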
## 📁 Checkpoint Structure
This repository contains fine-tuned checkpoints for ViT-B-32, ViT-B-16, and ViT-L-14 on all 8 tasks, covering the following fine-tuning modes:
| Directory | Mode | Description |
|---|---|---|
| `standard_1e-05_{model}/` | `standard` | Non-linear full fine-tuning (baseline) |
| `linear_1e-05_{model}/` | `linear` | TTA – tangent-space fine-tuning (baseline) |
| `linear-2_1e-05_{model}/` | `linear-2` | ATT-FT – attention-only fine-tuning (baseline) |
| `linear_ortho_1e-05_lambda1.0_{model}/` | `linear_ortho` | TTA + OrthoReg |
| `ViT-B-32/`, `ViT-B-16/`, `ViT-L-14/` | – | Pre-trained CLIP base model weights |
Each mode directory is organized by dataset:
```
{mode}_{lr}_{model}/
├── head_CarsVal.pt          # linear classification head
├── head_DTDVal.pt
├── head_EuroSATVal.pt
├── head_GTSRBVal.pt
├── head_MNISTVal.pt
├── head_RESISC45Val.pt
├── head_SUN397Val.pt
├── head_SVHNVal.pt
├── CarsVal/
│   ├── {mode}_finetuned.pt  # fine-tuned model weights (task vector + θ₀)
│   └── {mode}_zeroshot.pt   # zero-shot reference weights
├── DTDVal/
├── ...
└── SVHNVal/
```
All checkpoints use seed=1993 and lr=1e-5 to match the paper's reported results.
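A checkpoint pair can be turned into a task vector and composed with others via the usual task arithmetic, τ_t = θ_t − θ₀ and θ_edit = θ₀ + α Σ τ_t. A minimal sketch using plain dicts in place of state dicts; in practice the `.pt` files above would be loaded with `torch.load`, and the scaling coefficient `alpha` is a hypothetical placeholder:

```python
def task_vector(finetuned: dict, zeroshot: dict) -> dict:
    """tau_t = theta_t - theta_0, computed per parameter."""
    return {k: finetuned[k] - zeroshot[k] for k in zeroshot}

def apply_task_vectors(zeroshot: dict, vectors: list, alpha: float = 0.3) -> dict:
    """theta_edit = theta_0 + alpha * sum_t tau_t."""
    edited = dict(zeroshot)
    for tau in vectors:
        for k, v in tau.items():
            edited[k] = edited[k] + alpha * v
    return edited

theta0 = {"w": 1.0}
tau_a = task_vector({"w": 3.0}, theta0)   # tau_a["w"] == 2.0
tau_b = task_vector({"w": 0.0}, theta0)   # tau_b["w"] == -1.0
merged = apply_task_vectors(theta0, [tau_a, tau_b], alpha=0.5)
# merged["w"] == 1.0 + 0.5 * (2.0 - 1.0) == 1.5
```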
## 🚀 Usage
### Step 1 – Clone this repository
```bash
git lfs install
git clone https://huggingface.co/gezi2333/OrthoReg-checkpoints
```
Place the cloned checkpoints at `OrthoReg/checkpoints_1993/` inside your code directory:

```bash
mkdir -p OrthoReg/checkpoints_1993   # target directory must exist before moving
mv OrthoReg-checkpoints/* OrthoReg/checkpoints_1993/
```
### Step 2 – Install the codebase
```bash
git clone https://github.com/RL-MIND/OrthoReg
cd OrthoReg
conda env create
conda activate tangent-arithmetic
export PYTHONPATH="$PYTHONPATH:$PWD"
```
### Step 3 – Run evaluation
Evaluate single-task accuracy:
```bash
python src/eval_single_task.py \
    --model ViT-B-32 \
    --finetuning-mode linear_ortho \
    --ortho-lambda 1.0 \
    --lr 1e-5 \
    --seed 1993 \
    --data-location /path/to/datasets/
```
Evaluate task addition:
```bash
python src/eval_task_addition.py \
    --model ViT-B-32 \
    --finetuning-mode linear_ortho \
    --ortho-lambda 1.0 \
    --lr 1e-5 \
    --seed 1993 \
    --data-location /path/to/datasets/
```
Evaluate task negation:
```bash
python src/eval_task_negation.py \
    --model ViT-B-32 \
    --finetuning-mode linear_ortho \
    --ortho-lambda 1.0 \
    --lr 1e-5 \
    --seed 1993 \
    --data-location /path/to/datasets/
```
Run `eval_single_task` with `--finetuning-mode none --ortho-lambda 0` first to generate `zeroshot_accuracies.json`, which is required as the reference for normalized accuracy.
### Argument reference
| Argument | Value for these checkpoints |
|---|---|
| `--seed` | `1993` |
| `--lr` | `1e-5` |
| `--ortho-lambda` | `0` for baselines, `1.0` for the OrthoReg checkpoints here (matching `lambda1.0` in the directory names) |
| `--finetuning-mode` | see table above |
## 📦 Datasets
We evaluate on 8 image classification benchmarks: Cars · DTD · EuroSAT · GTSRB · MNIST · RESISC45 · SUN397 · SVHN
For dataset preparation, follow the instructions in the TTA repository.
## 📖 Citation
If you find this work useful, please cite:
```bibtex
@inproceedings{liu2026orthoreg,
  title     = {Understanding and Enforcing Weight Disentanglement in Task Arithmetic},
  author    = {Liu, Shangge and Yin, Yuehan and Wang, Lei and Fan, Qi and
               Shi, Yinghuan and Li, Wenbin and Gao, Yang and Tao, Dacheng},
  booktitle = {CVPR},
  year      = {2026}
}
```
## 💬 Acknowledgements
This codebase is built on top of Task Arithmetic, Tangent Task Arithmetic, and Attention-Only Fine-tuning. We thank the authors for releasing their code.