Entropy Minimization: Qwen3-8B-Base trained on DAPO-14k

This is the Qwen3-8B-Base model trained with entropy minimization on the DAPO-14k training set, as described in the paper "Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models".

For details, installation instructions, and usage examples, please refer to the official GitHub repository: https://github.com/tmlr-group/Co-rewarding.

Model size: 8B parameters
Tensor type: BF16 (Safetensors)

Hugging Face model ID: TMLR-Group-HF/Entropy-Qwen3-8B-Base-DAPO14k