---
base_model: meta-llama/Llama-3.3-70B-Instruct
library_name: peft
---

# LoRA Adapter for SFT

This is a LoRA (Low-Rank Adaptation) adapter trained using supervised fine-tuning (SFT).

## Base Model

- **Base Model**: `meta-llama/Llama-3.3-70B-Instruct`
- **Adapter Type**: LoRA
- **Task**: Supervised Fine-Tuning

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "thejaminator/female_vs_male_misaligned_hf_sft-20251022-lora")
```

## Training Details

This adapter was trained using supervised fine-tuning on conversation data to improve the model's ability to follow instructions and generate helpful responses.
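
For intuition on what the adapter contains: LoRA freezes the base weights `W` and learns a low-rank update `ΔW = (α/r)·B·A`, which is what PEFT applies on top of the base model at load time. A minimal NumPy sketch of that update (dimensions, rank, and scaling here are illustrative placeholders, not this adapter's actual config):

```python
import numpy as np

# Illustrative sizes: hidden dim d, LoRA rank r, scaling alpha
# (the real values live in the adapter's adapter_config.json)
d, r, alpha = 16, 4, 8
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))   # frozen base weight
A = rng.normal(size=(r, d))   # trainable down-projection
B = np.zeros((d, r))          # trainable up-projection (zero-initialised)

x = rng.normal(size=(d,))

# The adapter contributes a scaled low-rank correction to the frozen weight
delta_W = (alpha / r) * B @ A
y_adapted = (W + delta_W) @ x

# With B at its zero init, the adapted layer matches the base layer exactly,
# which is why a freshly initialised LoRA leaves the base model unchanged
y_base = W @ x
assert np.allclose(y_adapted, y_base)
```

Because `ΔW` factors through rank `r`, only the small `A` and `B` matrices are stored in this repository rather than full 70B-parameter weight deltas.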