---
base_model: Qwen/Qwen3-0.6B
library_name: peft
---
# Sanity Check Model

This model is a LoRA adapter for Qwen/Qwen3-0.6B, fine-tuned on the sanity check dataset for multiple-choice question answering (MCQA).

## Model Details
- Base model: Qwen/Qwen3-0.6B
- Fine-tuning method: LoRA
- Task: Multiple Choice Question Answering (MCQA)
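
The prompt layout used in the Usage example below can be sketched as a small helper (`format_mcqa` is an illustrative name, not part of this repo):

```python
# Illustrative sketch: build the lettered multiple-choice prompt
# that the Usage example constructs inline.
def format_mcqa(question, choices):
    """Join a question with A./B./C./... lettered choices."""
    lettered = [f"{chr(65 + i)}. {choice}" for i, choice in enumerate(choices)]
    return "\n".join([question] + lettered)

print(format_mcqa("What is 2+2?", ["3", "4", "5", "6"]))
```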

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# This repo stores a LoRA adapter; loading it directly through
# AutoModelForCausalLM requires `peft` to be installed (the base
# model Qwen/Qwen3-0.6B is fetched automatically).
model = AutoModelForCausalLM.from_pretrained("RikoteMaster/sanity_check_model")
tokenizer = AutoTokenizer.from_pretrained("RikoteMaster/sanity_check_model")

# Example usage
question = "What is 2+2?"
choices = ["3", "4", "5", "6"]

messages = [{
    "role": "user",
    "content": question + "\n" + "\n".join([f"{chr(65+i)}. {choice}" for i, choice in enumerate(choices)])
}]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # Qwen3-specific: skip the <think> block so the short answer fits in max_new_tokens
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
# Decode only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
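# A hypothetical helper (not part of the original card): extract the
# predicted letter from the decoded answer, assuming the model replies
# with a bare letter such as "B".
import re

def extract_choice(answer_text, num_choices=4):
    letters = "".join(chr(65 + i) for i in range(num_choices))
    match = re.search(rf"\b([{letters}])\b", answer_text)
    return match.group(1) if match else None

print(extract_choice("The answer is B"))  # prints: B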
```

## Framework versions

- PEFT 0.15.2