CoIRL-AD: Collaborative–Competitive Imitation–Reinforcement Learning in Latent World Models for Autonomous Driving
Xiaoji Zheng*,
Yangzi Yuan*,
Yanhao Chen,
Yuhang Peng,
Yuanrong Tang,
Gengyuan Liu,
Bokui Chen‡ and
Jiangtao Gong‡.
*: Equal contribution.
‡: Corresponding authors.
CoIRL-AD introduces a dual-policy framework that unifies imitation learning (IL) and reinforcement learning (RL) through a collaborative–competitive mechanism within a latent world model.
The framework enhances generalization and robustness in end-to-end autonomous driving without relying on external simulators.
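To make the collaborative–competitive idea concrete, here is a minimal numpy sketch (not the authors' implementation; the linear policy heads, the stand-in reward, and all variable names are hypothetical): two policy branches act on the same latent states, the IL branch is supervised by expert actions, a per-sample competition picks the higher-reward branch, and the losing branch is distilled toward the winner's action.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent rollouts: both branches see the same latent states.
latent = rng.normal(size=(8, 16))        # batch of latent states
expert_action = rng.normal(size=(8, 2))  # expert actions (IL targets)

def policy(w, z):
    # Linear policy head, for illustration only.
    return z @ w

w_il = rng.normal(size=(16, 2)) * 0.1    # IL branch parameters
w_rl = rng.normal(size=(16, 2)) * 0.1    # RL branch parameters

# IL branch: behavior-cloning loss against expert actions.
a_il = policy(w_il, latent)
a_rl = policy(w_rl, latent)
il_loss = np.mean((a_il - expert_action) ** 2)

def reward(a):
    # Stand-in reward from an imagined world-model rollout.
    return -np.sum(a ** 2, axis=-1)

# Competition: per sample, the branch with higher imagined reward wins.
winner_is_rl = reward(a_rl) > reward(a_il)

# Collaboration: each branch is pulled toward the winning branch's action.
target = np.where(winner_is_rl[:, None], a_rl, a_il)
distill_loss = np.mean((a_il - target) ** 2) + np.mean((a_rl - target) ** 2)

total_loss = il_loss + distill_loss
print(total_loss)
```

In the actual framework both branches share a latent world model and the RL branch learns from imagined rollouts; this sketch only shows how a competition signal can route supervision between the two policies.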
Here we provide our model checkpoints (see /ckpts) and the info files required by the dataloader (see /info-files), so you can download them and reproduce our experimental results.
For more details, please refer to our GitHub repository.