Making LLMs Optimize Multi-Scenario CUDA Kernels Like Experts
Abstract
A general-purpose automated GPU kernel optimization system is introduced that extends beyond machine learning applications to include scientific computing, using a multi-agent approach with hardware awareness and achieving performance comparable to closed-source libraries.
Optimizing GPU kernels manually is a challenging and time-consuming task. With the rapid development of LLMs, automated GPU kernel optimization is gradually becoming a tangible reality. However, current LLM-driven optimization methods focus narrowly on machine learning workloads, such as PyTorch operator optimization, while overlooking broader domains such as sparse matrix operations in scientific computing. Extending to these applications poses new challenges for both benchmarking and algorithm design, so developing a general-purpose automated kernel optimization method becomes our primary focus. In this paper, we address the absence of systematic evaluation for multi-scenario settings by introducing MSKernelBench, which spans multiple scenarios, including fundamental algebraic operations, common LLM kernels, sparse matrix operators, and scientific computing routines, each supporting both FP32 and BF16 precision. Building on this benchmark, we introduce CUDAMaster, a multi-agent, hardware-aware system for kernel optimization that leverages profiling information and automatically constructs the full compilation and execution toolchain. Experimental results demonstrate that CUDAMaster achieves significant speedups across most operators, outperforming Astra by about 35%. In several cases, its performance matches or surpasses that of highly optimized, closed-source libraries such as cuBLAS. A demo showcasing the original and optimized code for each operator is available at https://hanyx2021.github.io/MSKernelBenchDemo/.
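The abstract does not show MSKernelBench's actual evaluation harness, but benchmarking an optimized kernel against a reference implies two steps: a numerical-correctness check at the target precision, then a speedup measurement. The sketch below illustrates that pattern with toy CPU stand-ins (a naive matmul versus NumPy's BLAS-backed one); all function names and tolerances here are illustrative assumptions, not from the paper.

```python
import time
import numpy as np

def measure(fn, *args, warmup=3, iters=10):
    """Median wall-clock time of fn(*args), after warmup runs."""
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return float(np.median(times))

def check_and_speedup(reference, candidate, args, atol):
    """Verify candidate matches reference within atol, then report speedup."""
    ref_out = reference(*args)
    cand_out = candidate(*args)
    assert np.allclose(ref_out, cand_out, atol=atol), "numerical mismatch"
    return measure(reference, *args) / measure(candidate, *args)

# Toy stand-in for an unoptimized kernel: pure-Python triple loop.
def naive_matmul(a, b):
    return np.array([[sum(a[i][k] * b[k][j] for k in range(len(b)))
                      for j in range(len(b[0]))] for i in range(len(a))])

rng = np.random.default_rng(0)
a, b = rng.standard_normal((32, 32)), rng.standard_normal((32, 32))
# A looser atol would be appropriate for BF16; 1e-6 suits FP32-like checks.
speedup = check_and_speedup(naive_matmul, np.matmul, (a, b), atol=1e-6)
print(f"speedup: {speedup:.1f}x")
```

A real GPU harness would instead time kernels with CUDA events and synchronize the device before reading timestamps, but the correctness-then-speedup structure is the same.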
Community
An automated CUDA kernel optimization system can achieve performance comparable to cuBLAS.
The following similar papers were recommended by the Semantic Scholar API:
- Towards Automated Kernel Generation in the Era of LLMs (2026)
- A Two-Stage GPU Kernel Tuner Combining Semantic Refactoring and Search-Based Optimization (2026)
- KernelBlaster: Continual Cross-Task CUDA Optimization via Memory-Augmented In-Context Reinforcement Learning (2026)
- CUDABench: Benchmarking LLMs for Text-to-CUDA Generation (2026)
- K-Search: LLM Kernel Generation via Co-Evolving Intrinsic World Model (2026)
- AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units (2026)
- AscendCraft: Automatic Ascend NPU Kernel Generation via DSL-Guided Transcompilation (2026)