---
title: QP-RNN Interactive Demo
emoji: 🎮
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.35.0
app_file: app.py
pinned: false
license: mit
---

# QP-RNN: Quadratic Programming Recurrent Neural Network

This is an interactive demo for the paper ["MPC-Inspired Reinforcement Learning for Verifiable Model-Free Control"](https://arxiv.org/abs/2312.05332) (L4DC 2024).

## What is QP-RNN?

QP-RNN is a neural network architecture that combines:

- 🎯 **Structure** of Model Predictive Control (MPC)
- 🧠 **Learning** capabilities of deep reinforcement learning
- ✅ **Verifiable** properties (stability, constraint satisfaction)

At each time step, QP-RNN solves a parameterized quadratic program:

```
min  0.5 * y'Py + q(x)'y
s.t. -1 ≤ Hy + b(x) ≤ 1
```

where the matrices P and H and the state-dependent maps q(x) and b(x) are learned through RL rather than derived from a system model.

## Demo Features

This interactive demo lets you:

- 🎮 Control a double integrator system with QP-RNN
- 🔧 Adjust controller parameters in real time
- 📊 Visualize system response and phase portraits
- 📈 See performance metrics and constraint satisfaction

## Key Advantages

1. **Interpretable**: the QP structure makes the controller's behavior easy to reason about
2. **Verifiable**: enables formal stability and safety analysis
3. **Efficient**: a fixed-iteration solver is suitable for real-time control
4. **Robust**: handles constraints and disturbances naturally

## Links

- 📄 [Paper](https://arxiv.org/abs/2312.05332)
- 💻 [GitHub Repository](https://github.com/yiwenlu66/learning-qp)
- 🤖 [Full Training Code](https://github.com/yiwenlu66/learning-qp)

## Citation

```bibtex
@InProceedings{lu2024mpc,
  title={MPC-Inspired Reinforcement Learning for Verifiable Model-Free Control},
  author={Lu, Yiwen and Li, Zishuo and Zhou, Yihan and Li, Na and Mo, Yilin},
  booktitle={Proceedings of the 6th Conference on Learning for Dynamics and Control},
  year={2024}
}
```
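## Appendix: a Minimal QP-RNN Control Step in NumPy

The per-step QP above can be sketched with a fixed-iteration dual projected-gradient solve, which is one simple way to realize a fixed-iteration solver (the paper and demo may use a different scheme). The double-integrator dynamics and the particular `P`, `H`, `F`, `G` values below are hand-picked illustrations standing in for learned parameters, not values from the paper:

```python
import numpy as np

def qp_rnn_step(P, H, q, b, n_iter=50, alpha=None):
    """Approximately solve  min 0.5 y'Py + q'y  s.t. -1 <= Hy + b <= 1
    by projected gradient ascent on the dual (valid for P positive definite)."""
    Pinv = np.linalg.inv(P)
    if alpha is None:
        # Step size from the Lipschitz constant of the dual gradient
        alpha = 1.0 / np.linalg.norm(H @ Pinv @ H.T, 2)
    mu_plus = np.zeros(H.shape[0])   # multipliers for  Hy + b <= 1
    mu_minus = np.zeros(H.shape[0])  # multipliers for -(Hy + b) <= 1
    for _ in range(n_iter):
        # Primal minimizer of the Lagrangian for the current multipliers
        y = -Pinv @ (q + H.T @ (mu_plus - mu_minus))
        r = H @ y + b
        # Dual ascent with projection onto the nonnegative orthant
        mu_plus = np.maximum(0.0, mu_plus + alpha * (r - 1.0))
        mu_minus = np.maximum(0.0, mu_minus + alpha * (-r - 1.0))
    return -Pinv @ (q + H.T @ (mu_plus - mu_minus))

# Double integrator x+ = A x + B u (dt and gains are illustrative choices)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

P = np.eye(1)               # stand-in for a learned P (must be positive definite)
H = np.eye(1)               # stand-in for a learned H
F = np.array([[2.0, 3.0]])  # q(x) = F x, stand-in for a learned linear map
G = np.zeros((1, 2))        # b(x) = G x

x = np.array([1.0, 0.0])    # start at position 1, velocity 0
for _ in range(200):
    y = qp_rnn_step(P, H, F @ x, G @ x)
    u = y                   # with H = I, b = 0 the QP enforces |u| <= 1
    x = A @ x + B @ u

print(np.linalg.norm(x))  # state driven close to the origin
```

With these stand-in parameters the QP reduces to a saturated linear feedback `u = clip(-Fx, -1, 1)`, which makes the roles of the learned pieces easy to see: `q(x)` shapes the feedback law, while `H` and `b(x)` shape the constraint set.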