LoRA Adapter for SFT

This is a LoRA (Low-Rank Adaptation) adapter for meta-llama/Llama-3.3-70B-Instruct, trained with supervised fine-tuning (SFT).

Model Details

  • Base Model: meta-llama/Llama-3.3-70B-Instruct
  • Adapter Type: LoRA
  • Task: Supervised Fine-Tuning

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer (bf16 and device_map="auto" keep the 70B model practical to load)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")

# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(base_model, "thejaminator/female_vs_male_misaligned_hf_sft-20251022-step-1000")
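
Once loaded, the adapted model behaves like any causal language model. Below is a minimal generation sketch, assuming the base model's chat template is the intended prompt format; the example prompt is illustrative only:

# Build a chat-formatted prompt with the base model's chat template
messages = [{"role": "user", "content": "Give me one tip for writing clear documentation."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate a response and decode only the newly generated tokens
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))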

Training Details

This adapter was trained using supervised fine-tuning on conversation data to improve the model's ability to follow instructions and generate helpful responses.
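
The exact dataset and hyperparameters are not listed in this card, so the following is only a rough sketch of how a LoRA SFT run like this is commonly set up with peft and trl. The dataset path, LoRA rank/alpha, target modules, and max_steps=1000 (mirroring the step-1000 suffix in the repository name) are all assumptions:

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical conversation dataset (expected to contain a chat-style "messages" column);
# the actual training data is not specified in this card
dataset = load_dataset("json", data_files="conversations.jsonl", split="train")

# Typical LoRA settings for a Llama-style model; rank, alpha, and target modules are assumptions
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.3-70B-Instruct",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="llama-3.3-70b-sft-lora", max_steps=1000),
)
trainer.train()

With a peft_config supplied, SFTTrainer updates and saves only the adapter weights, which is why this repository contains just the LoRA adapter rather than full 70B model weights.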
