CoIRL-AD: Collaborative–Competitive Imitation–Reinforcement Learning in Latent World Models for Autonomous Driving

Xiaoji Zheng*, Yangzi Yuan*, Yanhao Chen, Yuhang Peng, Yuanrong Tang, Gengyuan Liu, Bokui Chen‡ and Jiangtao Gong‡.
*: Equal contribution. ‡: Corresponding authors.

CoIRL-AD introduces a dual-policy framework that unifies imitation learning (IL) and reinforcement learning (RL) through a collaborative–competitive mechanism within a latent world model.
The framework enhances generalization and robustness in end-to-end autonomous driving without relying on external simulators.


Here we provide our model checkpoints (see /ckpts) and the info files used by the dataloader (see /info-files), so you can download them and reproduce our experimental results.
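As a rough sketch, the checkpoints and info files can be fetched from the Hub with `huggingface_hub`; the repository id below is a placeholder for this model's actual Hub id, and the folder patterns assume the /ckpts and /info-files layout described above.

```python
from huggingface_hub import snapshot_download

# Placeholder repository id; replace with the id shown at the top of this page.
REPO_ID = "<org>/CoIRL-AD"

# Fetch only the checkpoint and info-file folders into a local directory.
local_path = snapshot_download(
    repo_id=REPO_ID,
    allow_patterns=["ckpts/*", "info-files/*"],
    local_dir="./CoIRL-AD-assets",
)
print(f"Files downloaded to: {local_path}")
```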

For more details, please refer to our GitHub repository.
