---
language:
  - en
license: cc-by-nc-4.0
library_name: transformers
tags:
  - llama
  - math
  - reasoning
  - fine-tuned
  - fine-tuning
pipeline_tag: text-generation
model-index:
  - name: Llama-3.1-8B-math-reasoning
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: tulu3_mixture_math_reasoning
          type: custom
        metrics:
          - name: Training Loss
            type: loss
            value: 0.98
base_model: meta-llama/Llama-3.1-8B
---

# Llama-3.1-8B Math Reasoning Model

This repository hosts Llama-3.1-8B SFT checkpoints for mathematical reasoning, released as artifacts of the paper at https://arxiv.org/abs/2509.11167.
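
A minimal inference sketch with the Transformers library is shown below. The repository id is assumed from this card's model name and is not stated explicitly here; adjust the id, dtype, and prompt format to your setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id, inferred from this card's model name; adjust if it differs.
model_id = "pmahdavi/Llama-3.1-8B-math-reasoning"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Solve step by step: if 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```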

## Model Details

- Base model: Llama-3.1-8B
- Training dataset: tulu3_mixture_math_reasoning
- Learning rate: 5e-06
- Effective batch size: 128 (see the configuration sketch below)
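
The hyperparameters above map roughly onto a standard Transformers training configuration as sketched here. The per-device batch size, gradient-accumulation steps, and GPU count are assumptions chosen only so their product matches the stated effective batch size of 128; they are not taken from the paper.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama-3.1-8b-math-reasoning-sft",  # hypothetical output path
    learning_rate=5e-6,                # as listed above
    per_device_train_batch_size=4,     # assumption
    gradient_accumulation_steps=4,     # assumption: 4 * 4 * 8 GPUs = 128 effective
    bf16=True,                         # assumption, common for Llama-3.1 SFT
)
```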

## Export Files

This repository also includes export files intended for state averaging and other advanced techniques; a minimal averaging sketch is shown below.
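
The sketch assumes the export files are ordinary Transformers checkpoints; the directory names and output path are placeholders, and the actual export format in this repository may differ.

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder checkpoint directories; the real export-file layout may differ.
checkpoint_dirs = ["ckpt-step-1000", "ckpt-step-2000", "ckpt-step-3000"]

# Accumulate parameters across checkpoints, then take the element-wise mean.
avg_state = None
for path in checkpoint_dirs:
    state = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float32).state_dict()
    if avg_state is None:
        avg_state = {k: v.clone() for k, v in state.items()}
    else:
        for k, v in state.items():
            avg_state[k] += v
avg_state = {k: v / len(checkpoint_dirs) for k, v in avg_state.items()}

# Write the averaged weights back into a model instance and save it.
model = AutoModelForCausalLM.from_pretrained(checkpoint_dirs[-1], torch_dtype=torch.float32)
model.load_state_dict(avg_state)
model.save_pretrained("llama-3.1-8b-math-reasoning-averaged")  # hypothetical output name
```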