arxiv:2605.06734

Gated QKAN-FWP: Scalable Quantum-inspired Sequence Learning

Published on May 7 · Submitted by Jiun-Cheng Jiang on May 11

Abstract

Quantum-inspired fast-weight programming framework using single-qubit circuits achieves superior forecasting performance with reduced parameters compared to classical recurrent models while maintaining NISQ device compatibility.

AI-generated summary

Fast Weight Programmers (FWPs) encode temporal dependencies through dynamically updated parameters rather than recurrent hidden states. Quantum FWPs (QFWPs) extend this idea with variational quantum circuits (VQCs), but existing implementations rely on multi-qubit architectures that are difficult to scale on noisy intermediate-scale quantum (NISQ) devices and expensive to simulate classically. We propose gated QKAN-FWP, a fast-weight framework that integrates FWP with the Quantum-inspired Kolmogorov-Arnold Network (QKAN), using single-qubit data re-uploading circuits as a learnable nonlinear activation, known as the DatA Re-Uploading ActivatioN (DARUAN). We further introduce a scalar-gated fast-weight update rule that stabilizes parameter evolution, supported by a theoretical analysis of its adaptive memory kernel, geometric boundedness, and parallelizable gradient paths. We evaluate the framework on time-series benchmarks and MiniGrid reinforcement learning, and highlight real-world solar cycle forecasting as our main practical result. In the long-horizon setting with a 528-month input window and a 132-month forecast horizon, our 12.5k-parameter model achieves lower scaled Mean Square Error (MSE), peak amplitude error, and peak timing error than a suite of classical recurrent baselines with up to 13x more parameters, including Long Short-Term Memory (LSTM) networks (25.9k-89.1k parameters), WaveNet-LSTM (167k), a vanilla recurrent neural network (11.5k), and a Modified Echo State Network (132k). To validate NISQ compatibility, we further deploy the trained fast programmer on IonQ and IBM Quantum processors, recovering forecasting accuracy within 0.1% relative MSE of the noiseless simulator at 1024 shots. These results position gated QKAN-FWP as a scalable, parameter-efficient, and NISQ-compatible approach to quantum-inspired sequence modeling.
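The abstract describes single-qubit data re-uploading circuits used as learnable nonlinear activations (DARUAN). The following is a minimal numerical sketch of that idea, not the paper's exact circuit: a scalar input is re-uploaded across several RY rotations with trainable scale and shift parameters, and the Pauli-Z expectation value serves as the activation output. The layer count, parameterization, and gate choice here are illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """2x2 rotation about the Y axis of the Bloch sphere."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def daruan(x, alphas, betas):
    """Illustrative single-qubit data re-uploading activation.

    Re-uploads the scalar input x across len(alphas) layers, each applying
    RY(alpha_l * x + beta_l) to the state, then returns the Pauli-Z
    expectation, a bounded nonlinear function of x.
    """
    state = np.array([1.0, 0.0])          # start in |0>
    for a, b in zip(alphas, betas):
        state = ry(a * x + b) @ state     # re-upload x with trainable scale/shift
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ z @ state)       # <psi| Z |psi>, always in [-1, 1]

# Example: a 3-layer activation evaluated on a grid of inputs
rng = np.random.default_rng(0)
alphas, betas = rng.normal(size=3), rng.normal(size=3)
ys = [daruan(x, alphas, betas) for x in np.linspace(-1, 1, 5)]
print(all(-1.0 <= y <= 1.0 for y in ys))
```

Because the unitary rotations preserve the state norm, the activation is automatically bounded, which is one reason such circuits can serve as stable learnable nonlinearities.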

Community

Paper submitter

Gated QKAN-FWP introduces a scalable quantum-inspired fast-weight programming framework for sequence learning. It combines HQKAN/DARUAN single-qubit data re-uploading activations with a scalar-gated fast-weight update, yielding adaptive memory, bounded parameter evolution, and parallelizable sequence processing. The paper reports strong results on time-series benchmarks, MiniGrid reinforcement learning, and long-horizon solar cycle forecasting, where a 12.5k-parameter model outperforms recurrent baselines with up to 13x more parameters and is validated on IonQ and IBM quantum hardware, staying within ~0.1% relative MSE of noiseless simulation at 1024 shots.
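The scalar-gated fast-weight update mentioned above can be sketched as follows. This is an assumed form in the spirit of classical fast-weight programmers, not the paper's exact rule: a slow network would emit a key, a value, and a scalar gate logit, and the fast-weight matrix is decayed by the gate and written with a rank-one outer product. The gate parameterization and learning rate here are hypothetical.

```python
import numpy as np

def gated_fwp_step(W, k, v, gate_logit, lr=0.5):
    """One scalar-gated fast-weight update (illustrative sketch).

    The scalar gate g in (0, 1) interpolates between retaining the old
    fast weights and writing the new rank-one association v k^T, which
    keeps the parameter evolution bounded for bounded inputs.
    """
    g = 1.0 / (1.0 + np.exp(-gate_logit))    # sigmoid gate in (0, 1)
    return (1.0 - g) * W + lr * g * np.outer(v, k)

# Example: roll the fast weights over a short sequence, then query them
rng = np.random.default_rng(1)
W = np.zeros((4, 3))
for _ in range(10):
    k = rng.normal(size=3)
    v = rng.normal(size=4)
    W = gated_fwp_step(W, k, v, gate_logit=rng.normal())
y = W @ rng.normal(size=3)                   # read out with a query vector
print(W.shape, y.shape)
```

Because each step depends on the previous fast weights only through a scalar decay, the cumulative contribution of each write can be expressed as a product of gates, which is the kind of structure that admits the parallelizable gradient paths the paper analyzes.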



