Abstract
We enable lightweight reasoning in small language models through LoRA adapters, budget forcing via reinforcement learning, parallel test-time scaling, and dynamic adapter switching under strict resource constraints.
Large language models (LLMs) with chain-of-thought reasoning achieve state-of-the-art performance on complex problem-solving tasks, but their verbose reasoning traces and large context requirements make them impractical for edge deployment: token generation is costly, KV-cache footprints are large, and distilling reasoning capabilities into smaller models for mobile devices is inefficient. Existing approaches often distill reasoning traces from larger models into smaller ones, but these traces are verbose and stylistically redundant, which is undesirable for on-device inference. In this work, we propose a lightweight approach that enables reasoning in small LLMs using LoRA adapters combined with supervised fine-tuning. We further introduce budget forcing via reinforcement learning on these adapters, significantly reducing response length with minimal accuracy loss. To address memory-bound decoding, we exploit parallel test-time scaling, improving accuracy at a minor latency increase. Finally, we present a dynamic adapter-switching mechanism that activates reasoning only when needed, together with a KV-cache sharing strategy during prompt encoding that reduces time-to-first-token for on-device inference. Experiments on Qwen2.5-7B demonstrate that our method achieves efficient, accurate reasoning under strict resource constraints, making LLM reasoning practical for mobile scenarios. Videos demonstrating our solution running on mobile devices are available on our project page.
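The abstract's dynamic adapter-switching and budget-forcing ideas can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the router heuristic, the `route_query` and `generate` names, the keyword list, and the token budget are all assumptions made for the sketch.

```python
# Hypothetical sketch: a lightweight router activates the reasoning LoRA
# adapter only when a query appears to need multi-step reasoning, and a
# hard token budget stands in for RL-trained budget forcing.
# All names and heuristics here are illustrative, not the paper's API.

REASONING_KEYWORDS = ("prove", "solve", "compute", "derive", "explain why")

def route_query(prompt: str) -> str:
    """Pick an adapter: 'reasoning' only when the query seems to need it."""
    needs_reasoning = any(k in prompt.lower() for k in REASONING_KEYWORDS)
    return "reasoning" if needs_reasoning else "base"

def generate(prompt: str, adapter: str, token_budget: int) -> dict:
    """Toy stand-in for decoding under a budget cap.

    A real implementation would swap LoRA weights in and out and stop
    decoding once the budget is exhausted; here we only record which
    adapter would run and the cap that would be enforced.
    """
    return {"prompt": prompt, "adapter": adapter, "max_new_tokens": token_budget}

math_query = "Solve 12 * 7 step by step."
chat_query = "What's a good name for a cat?"

print(generate(math_query, route_query(math_query), token_budget=256)["adapter"])
print(generate(chat_query, route_query(chat_query), token_budget=64)["adapter"])
```

In a real system the router would be a small classifier rather than keyword matching, and the budget would be learned per-task via the RL objective described in the abstract.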
Community
Proposes supervised fine-tuning of LoRA adapters, budget forcing via reinforcement learning, parallel test-time scaling, dynamic adapter switching, and KV-cache sharing to enable efficient, accurate reasoning in small LLMs for on-device inference.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Training Large Reasoning Models Efficiently via Progressive Thought Encoding (2026)
- PACE: Prefix-Protected and Difficulty-Aware Compression for Efficient Reasoning (2026)
- Towards Efficient Large Language Reasoning Models via Extreme-Ratio Chain-of-Thought Compression (2026)
- ReasonCACHE: Teaching LLMs To Reason Without Weight Updates (2026)
- Compress the Easy, Explore the Hard: Difficulty-Aware Entropy Regularization for Efficient LLM Reasoning (2026)
- ConPress: Learning Efficient Reasoning from Multi-Question Contextual Pressure (2026)
- Shorter Thoughts, Same Answers: Difficulty-Scaled Segment-Wise RL for CoT Compression (2026)