Co-rewarding-I: Qwen3-8B-Base trained on DAPO-14k

This model is Qwen3-8B-Base trained with Co-rewarding-I on the DAPO-14k training set. It was presented in the paper "Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models".
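The checkpoint can be loaded like any causal LM from the Hub. Below is a minimal usage sketch with the Hugging Face transformers library; the prompt format and generation settings are illustrative assumptions and may differ from the authors' evaluation setup.

```python
# Minimal loading/generation sketch (assumed usage; not the authors' official
# evaluation script). Prompt and decoding settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Co-rewarding-I-Qwen3-8B-Base-DAPO14k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint weights are stored in BF16
    device_map="auto",
)

prompt = "Solve step by step: if 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```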

Co-rewarding is a self-supervised reinforcement learning (RL) framework designed to improve the training stability of large language models (LLMs) on reasoning tasks. This model uses Co-rewarding-I, a data-side instantiation that derives reward signals from contrastive agreement across semantically analogous questions. This approach aims to mitigate the training collapse and reward hacking often encountered in single-view self-rewarding methods, thereby strengthening the LLM's reasoning on challenging problems such as mathematical reasoning.
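As a rough, hypothetical illustration of the idea (not the authors' implementation; see the official repository for the real reward), the sketch below scores completions for an original question by how often their final answer agrees with completions sampled for a semantically analogous rephrasing. The function names (extract_final_answer, agreement_reward) and the frequency-based agreement criterion are assumptions made purely for illustration.

```python
# Hypothetical sketch of a contrastive-agreement reward (illustrative only).
from collections import Counter

def extract_final_answer(completion: str) -> str:
    """Toy answer extractor: take the last whitespace-separated token."""
    tokens = completion.strip().split()
    return tokens[-1] if tokens else ""

def agreement_reward(answers_original: list[str], answers_rephrased: list[str]) -> list[float]:
    """Reward each answer to the original question by the fraction of completions
    for the semantically analogous question that share the same final answer."""
    finals_rephrased = Counter(extract_final_answer(a) for a in answers_rephrased)
    total = max(len(answers_rephrased), 1)
    return [
        finals_rephrased.get(extract_final_answer(a), 0) / total
        for a in answers_original
    ]

# Example: "x = 7" agrees with 2 of the 3 rephrased-question completions, "x = 5" with none.
print(agreement_reward(["x = 7", "x = 5"], ["the answer is 7", "7", "6"]))
```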

For more details on the Co-rewarding framework, access to the code, and other trained checkpoints, please refer to the official GitHub repository: https://github.com/tmlr-group/Co-rewarding.

Model: TMLR-Group-HF/Co-rewarding-I-Qwen3-8B-Base-DAPO14k · 8B parameters · BF16 · Safetensors
