# LoGo LoRA Collection
A collection of LoRA adapters used in *LoGo: LoRA on the Go* (ACL 2026).
LoGo dynamically selects and merges the most relevant LoRA adapters at inference time for each input instance. This collection provides the 260 task-specific adapters per base model that LoGo selects from.
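LoGo's per-instance selection logic lives in its repository; as a minimal sketch of just the merging step, PEFT's `add_weighted_adapter` can combine several adapters from this collection into one. The second task name and the uniform weights below are illustrative placeholders, not LoGo's actual selection output.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

REPO = "archon159/LoGo-loras-collection"
PREFIX = "Llama-3.1-8B-loras-flanv2"

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

# Load two adapters from the collection under distinct adapter names.
# (The second task name is a placeholder; see flanv2_task_list for real names.)
model = PeftModel.from_pretrained(
    base,
    REPO,
    subfolder=f"{PREFIX}/flanv2.ai2_arc_ARC-Challenge",
    adapter_name="arc",
)
model.load_adapter(REPO, subfolder=f"{PREFIX}/flanv2.bool_q", adapter_name="boolq")

# Merge with uniform weights (placeholder; LoGo computes its own per-input weights).
model.add_weighted_adapter(
    adapters=["arc", "boolq"],
    weights=[0.5, 0.5],
    adapter_name="merged",
    combination_type="linear",
)
model.set_adapter("merged")
```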
## Contents
Each base model has its own subfolder containing 260 LoRA adapters trained on FlanV2 tasks:
```
LoGo-loras-collection/
├── Llama-3.1-8B-loras-flanv2/
│   ├── flanv2.<task_name>/
│   │   ├── adapter_config.json
│   │   └── adapter_model.safetensors
│   └── ... (260 adapters)
├── Qwen2.5-7B-loras-flanv2/ (260 adapters)
└── deepseek-llm-7b-base-loras-flanv2/ (260 adapters)
```
The full list of 260 FlanV2 tasks is provided in `flanv2_task_list`.
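For programmatic access, the adapter subfolders can also be enumerated with `huggingface_hub` (a small sketch; assumes the repo id used in the Usage section below):

```python
from huggingface_hub import list_repo_files

# Collect the adapter folder names under one base-model directory.
files = list_repo_files("archon159/LoGo-loras-collection")
tasks = sorted({
    path.split("/")[1]
    for path in files
    if path.startswith("Llama-3.1-8B-loras-flanv2/")
})
print(len(tasks))  # should print 260
```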
## Training Details
All adapters share the same LoRA configuration:
| Hyperparameter | Value |
|---|---|
| Rank (`r`) | 16 |
| Alpha (`lora_alpha`) | 16 |
| Dropout | 0.05 |
| Target modules | `q_proj`, `v_proj` |
| Task type | `CAUSAL_LM` |
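Expressed as a PEFT `LoraConfig`, this corresponds to the following (a sketch; the exact training script may set additional fields):

```python
from peft import LoraConfig

config = LoraConfig(
    r=16,                                 # rank
    lora_alpha=16,                        # scaling alpha
    lora_dropout=0.05,                    # dropout on LoRA layers
    target_modules=["q_proj", "v_proj"],  # attention query/value projections
    task_type="CAUSAL_LM",
)
```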
## Usage

### With LoGo
Follow the instructions in the LoGo repository. Adapters are downloaded automatically.
```bash
python3 main.py --base_model Llama-3.1-8B --dataset bbh.boolean_expressions --gpu 0
```
### Standalone (via PEFT)
Individual adapters can be loaded directly using the PEFT library:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and its tokenizer.
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")

# Attach a single task-specific adapter from the collection.
model = PeftModel.from_pretrained(
    base_model,
    "archon159/LoGo-loras-collection",
    subfolder="Llama-3.1-8B-loras-flanv2/flanv2.ai2_arc_ARC-Challenge",
)
```
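Once loaded, the adapter-equipped model behaves like any `transformers` causal LM; for example (the prompt is a placeholder):

```python
# Run generation through the adapted model.
inputs = tokenizer("Which planet is known as the Red Planet?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```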