Improve model card: Add pipeline tag, library name, abstract, and links
This PR enhances the model card for `dongguanting/Qwen3-14B-AEPO-DeepSearch` by adding key metadata and comprehensive content:
- Adds `pipeline_tag: text-generation`, the appropriate task for this `Qwen3ForCausalLM`-based agentic reinforcement learning model, improving discoverability on the Hub.
- Adds `library_name: transformers`, since `config.json` (`"architectures": ["Qwen3ForCausalLM"]`, `"transformers_version": "4.51.3"`) confirms compatibility with the 🤗 Transformers library.
- Updates the content to include the full paper title, a link to the Hugging Face paper page, and the complete paper abstract.
- Adds a link to the official GitHub repository for easy access to the codebase and project details.
No sample usage section is included in the card itself, as no explicit code snippet was found in the project's GitHub README.
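That said, the `library_name: transformers` metadata implies the checkpoint loads through the standard Transformers auto-classes. The snippet below is a minimal sketch under that assumption; the prompt and generation settings are illustrative and not taken from the project's README:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dongguanting/Qwen3-14B-AEPO-DeepSearch"

# Load tokenizer and model; device_map="auto" places weights on available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Qwen3 checkpoints ship a chat template, so format the prompt through it.
messages = [{"role": "user", "content": "Who won the 2024 Nobel Prize in Physics?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```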
The resulting change to the model card (`README.md`):

````diff
@@ -1,4 +1,29 @@
 ---
 license: mit
+pipeline_tag: text-generation
+library_name: transformers
 ---
-
+
+# Agentic Entropy-Balanced Policy Optimization (AEPO)
+
+This repository contains the **Qwen3-14B-AEPO-DeepSearch** model, an implementation of the **Agentic Entropy-Balanced Policy Optimization (AEPO)** algorithm, as presented in the paper [Agentic Entropy-Balanced Policy Optimization](https://huggingface.co/papers/2510.14545).
+
+## Abstract
+
+Recently, Agentic Reinforcement Learning (Agentic RL) has made significant progress in incentivizing the multi-turn, long-horizon tool-use capabilities of web agents. While mainstream agentic RL algorithms autonomously explore high-uncertainty tool-call steps under the guidance of entropy, excessive reliance on entropy signals can impose further constraints, leading to training collapse. In this paper, we delve into the challenges caused by entropy and propose Agentic Entropy-Balanced Policy Optimization (AEPO), an agentic RL algorithm designed to balance entropy in both the rollout and policy update phases. AEPO comprises two core components: (1) a dynamic entropy-balanced rollout mechanism that adaptively allocates global and branch sampling budgets through entropy pre-monitoring, while imposing a branch penalty on consecutive high-entropy tool-call steps to prevent over-branching; and (2) Entropy-Balanced Policy Optimization, which inserts a stop-gradient operation into the high-entropy clipping term to preserve and properly rescale gradients on high-entropy tokens, while incorporating entropy-aware advantage estimation to prioritize learning on high-uncertainty tokens. Results across 14 challenging datasets show that AEPO consistently outperforms 7 mainstream RL algorithms. With just 1K RL samples, Qwen3-14B with AEPO achieves impressive results: 47.6% on GAIA, 11.2% on Humanity's Last Exam, and 43.0% on WebWalker for Pass@1; 65.0% on GAIA, 26.0% on Humanity's Last Exam, and 70.0% on WebWalker for Pass@5. Further analysis reveals that AEPO improves rollout sampling diversity while maintaining stable policy entropy, facilitating scalable web agent training.
+
+## GitHub Repository
+
+The official codebase, including training and evaluation scripts for ARPO and AEPO, can be found on the project's GitHub repository: [https://github.com/RUC-NLPIR/ARPO](https://github.com/RUC-NLPIR/ARPO).
+
+## Citation
+
+If you find this work helpful, please cite our paper:
+
+```bibtex
+@misc{dong2025aepo,
+      title={Agentic Entropy-Balanced Policy Optimization},
+      author={Guanting Dong and Licheng Bao and Zhongyuan Wang and Kangzhi Zhao and Xiaoxi Li and Jiajie Jin and Jinghan Yang and Hangyu Mao and Fuzheng Zhang and Kun Gai and Guorui Zhou and Yutao Zhu and Ji-Rong Wen and Zhicheng Dou},
+      year={2025},
+      eprint={2510.14545},
+      archivePrefix={arXiv},
+      primaryClass={cs.LG},
+      url={https://arxiv.org/abs/2510.14545},
+}
+```
````
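For readers skimming the abstract above, the policy-update side of AEPO (component 2) can be pictured with a short sketch. The PyTorch fragment below is one plausible reading, not the paper's exact objective: the clip bound is applied through a detached rescaling factor so clipped high-entropy tokens keep a gradient through the probability ratio, and advantages are reweighted by normalized token entropy. The function name and the `entropy_threshold`/`alpha` knobs are illustrative assumptions; the real implementation lives in the ARPO repository linked above.

```python
import torch

def aepo_style_policy_loss(logp_new, logp_old, advantages, token_entropy,
                           clip_eps=0.2, entropy_threshold=None, alpha=0.5):
    """Hedged sketch of a PPO-style loss with (i) a stop-gradient on the
    clipping term for high-entropy tokens and (ii) entropy-aware advantage
    reweighting. Names and the threshold/alpha knobs are illustrative;
    see arXiv:2510.14545 for the exact AEPO formulation."""
    ratio = torch.exp(logp_new - logp_old)

    # (ii) Entropy-aware advantage: upweight high-uncertainty tokens.
    ent_norm = (token_entropy - token_entropy.mean()) / (token_entropy.std() + 1e-6)
    adv = advantages * (1.0 + alpha * torch.sigmoid(ent_norm))

    # Standard clipped surrogate, per token.
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    loss_tok = -torch.min(unclipped, clipped)

    # (i) Stop-gradient variant for high-entropy tokens: detach the rescaling
    # factor so the forward value still equals the clamped ratio, but the
    # gradient flows through `ratio` (rescaled) instead of being zeroed.
    if entropy_threshold is not None:
        scale = (torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) / ratio).detach()
        sg_clipped = ratio * scale * adv
        loss_sg = -torch.min(unclipped, sg_clipped)
        high_ent = token_entropy > entropy_threshold
        loss_tok = torch.where(high_ent, loss_sg, loss_tok)

    return loss_tok.mean()
```

Because `scale` is detached, the forward value of the clipped term is identical to standard PPO clipping; only the backward pass changes, which is one way to realize the abstract's "preserve and properly rescale gradients on high-entropy tokens."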
|