# EduSolver (LoRA Fine-tuned Model)

**Base model:** `microsoft/phi-2`

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model_name = "microsoft/phi-2"

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "DMxObito/EduSolver")

# Test generation
input_text = "Explain Newton's laws:"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
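If GPU memory is tight, the base model can also be loaded in 4-bit before attaching the adapter. A minimal sketch using `BitsAndBytesConfig` (this assumes the `bitsandbytes` package is installed; quantized loading is not part of the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# 4-bit NF4 quantization config (assumption: bitsandbytes is available)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# The LoRA adapter loads on top of the quantized base just as in float16
model = PeftModel.from_pretrained(base_model, "DMxObito/EduSolver")
```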

> ⚠️ This is a LoRA adapter, not a full model. It must be loaded on top of the base model `microsoft/phi-2`, as shown above.
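If you prefer a standalone checkpoint instead of loading the adapter at runtime, the LoRA weights can be folded into the base model with PEFT's `merge_and_unload()`. A sketch; the output directory name here is hypothetical:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "DMxObito/EduSolver")

# Fold the LoRA weights into the base weights and drop the PEFT wrappers
merged = model.merge_and_unload()

# Save a self-contained model that no longer needs the adapter
merged.save_pretrained("edusolver-merged")  # hypothetical output path
AutoTokenizer.from_pretrained("microsoft/phi-2").save_pretrained("edusolver-merged")
```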
