TMLR-Group-HF/GT-Qwen3-4B-Base

This model, TMLR-Group-HF/GT-Qwen3-4B-Base, is a Qwen3-4B-Base model trained with GRPO using ground-truth (GT) rewards on the MATH training set. It is part of the research presented in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models. The weights are released as BF16 safetensors (~4B parameters).
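Below is a minimal loading and generation sketch using the Hugging Face Transformers library. The prompt, dtype, and generation settings are illustrative choices, not prescribed by the authors, and `device_map="auto"` assumes the accelerate package is installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/GT-Qwen3-4B-Base"  # repository id as given in this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # checkpoint is released in BF16
    device_map="auto",            # requires `accelerate`
)

# Illustrative math prompt; the base model is not instruction-tuned,
# so a plain completion-style prompt is used here.
prompt = "Question: What is the sum of the first 100 positive integers?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```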

Co-rewarding is a novel self-supervised reinforcement learning (RL) framework designed to improve training stability by seeking complementary supervision from alternative views. It addresses the scaling dilemma and training collapse issues often encountered in self-rewarding methods for eliciting reasoning in large language models (LLMs). The framework is instantiated in two ways: Co-rewarding-I (data-side, using contrastive agreement) and Co-rewarding-II (model-side, using self-distillation with a slowly-updated reference teacher).
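As a rough illustration of these two instantiations, the sketch below shows (i) a cross-view agreement reward in the spirit of Co-rewarding-I and (ii) an EMA-style slowly-updated teacher with a pseudo-label reward in the spirit of Co-rewarding-II. The function names, the EMA update rule, and the majority-vote pseudo-labeling are assumptions made for illustration only; see the paper and GitHub repository for the actual formulation.

```python
import torch

def cross_view_agreement_reward(answer, answers_from_other_view):
    # Co-rewarding-I sketch (assumption): reward a rollout whose final answer
    # agrees with the majority answer obtained from a rephrased view of the question.
    pseudo_label = max(set(answers_from_other_view), key=answers_from_other_view.count)
    return 1.0 if answer == pseudo_label else 0.0

@torch.no_grad()
def ema_update_teacher(teacher, student, tau=0.99):
    # Co-rewarding-II sketch (assumption): the reference teacher is a slowly-updated
    # (here: exponential moving average) copy of the policy being trained.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(tau).add_(p_s, alpha=1.0 - tau)

def teacher_pseudo_label_reward(answer, teacher_answers):
    # Co-rewarding-II sketch (assumption): answers sampled from the teacher provide a
    # pseudo-label that stands in for the ground-truth reward used in GT-GRPO.
    pseudo_label = max(set(teacher_answers), key=teacher_answers.count)
    return 1.0 if answer == pseudo_label else 0.0
```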

Further details on the Co-rewarding framework, training procedures, and other checkpoints can be found on the GitHub repository.

Citation

@article{zhang2025coreward,
      title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
      author={Zizhuo Zhang and Jianing Zhu and Xinmu Ge and Zihua Zhao and Zhanke Zhou and Xuan Li and Xiao Feng and Jiangchao Yao and Bo Han},
      journal={arXiv preprint arXiv:2508.00410},
      year={2025},
}