This is the general version of Standard-1.7B, trained on a mixture of math, code, and science data, as presented in the paper "Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models".

Please visit our GitHub repo for more information.

Sample Usage

Please see the GitHub example for sample usage; a minimal loading sketch is also shown below.
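
The snippet below is a minimal sketch, assuming the checkpoint loads through the standard Hugging Face Transformers causal-LM interface and that the tokenizer ships a chat template; the GitHub example remains the authoritative reference, including any Think-at-Hard-specific inference settings.

```python
# Minimal sketch: assumes nics-efc/Standard-1.7B works with the standard
# Transformers causal-LM classes. See the GitHub example for the
# authoritative usage, including any Think-at-Hard-specific settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nics-efc/Standard-1.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

# Assumes a chat template is provided; otherwise tokenize the prompt directly.
prompt = "Prove that the sum of two even integers is even."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that this plain `generate` call only exercises the standard forward pass; the selective latent iteration behavior described in the paper may require the custom inference code from the GitHub repository.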
