Playdate SmolLM2-135M: 5k Samples

This is a proof-of-concept model and is not intended for production use. An example inference script is shown below.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_PATH = "x5tne/playdate-smollm2-135m-5k" 

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)
model.eval()

# Example prompt
prompt = """<system shy>
<summary>none</summary>
<user>Hi [namehere]! How are you today?</user>
<assistant>"""

# Encode input
inputs = tokenizer(prompt, return_tensors="pt")

# Generate output
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,
        temperature=0.9,
        top_p=0.9,
        repetition_penalty=1.15
    )

# Decode generated tokens
response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)

print("Assistant response:")
print(response)
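
The prompt follows the chat layout visible in the example above: a persona tag (such as <system shy>), a <summary> block, and <user>/<assistant> turns. The following is a minimal, hedged sketch of helper functions for assembling prompts in that layout and tidying the decoded continuation. It assumes, which this card does not confirm, that the fine-tuning data closes the assistant turn with a </assistant> tag that can be used as a stop marker; the persona name and [namehere] placeholder are taken from the example prompt.

def build_prompt(persona: str, summary: str, user_message: str) -> str:
    # Assemble a prompt in the same tag layout as the example prompt above.
    return (
        f"<system {persona}>\n"
        f"<summary>{summary}</summary>\n"
        f"<user>{user_message}</user>\n"
        "<assistant>"
    )

def trim_response(generated: str) -> str:
    # Cut the decoded continuation at the first closing assistant tag, if one
    # appears (assumption: the model emits </assistant> to end its turn).
    end = generated.find("</assistant>")
    return generated[:end].strip() if end != -1 else generated.strip()

# Example usage with the script above:
prompt = build_prompt("shy", "none", "Hi [namehere]! How are you today?")
clean_reply = trim_response(response)  # 'response' is the decoded string from the script above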