---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- simplescaling/s1K-1.1
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
## Model Summary
s1.1-0.5B is a successor to s1 with better reasoning performance, obtained by leveraging reasoning traces from r1 (DeepSeek-R1) instead of Gemini. This model was created simply to test, on consumer-grade GPUs, the process used to train the original s1.1 cited below.
Thanks to Ryan Marten for helping generate r1 traces for s1K.
## Use
The model usage is documented here.
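
As a minimal sketch, the model can be loaded like any Qwen2.5-based causal language model with the `transformers` library. The model ID and generation settings below are assumptions for illustration; replace the ID with this repository's actual one.

```python
# Minimal usage sketch. The model ID is an assumption; replace it with this repo's ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "simplescaling/s1.1-0.5B"  # assumed ID for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt with the tokenizer's chat template (inherited from Qwen2.5-Instruct).
messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a reasoning trace plus answer; generation settings are illustrative, not tuned.
output_ids = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```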