hardware algorithms machine_learning simulation

Soft-Quantum Algorithms

Curator's Take

This research introduces a clever workaround to one of quantum machine learning's biggest bottlenecks: the painfully slow training of variational quantum circuits on classical simulators. By training unitary matrices directly as "soft-unitaries" before converting them back to gate sequences, the researchers achieved a remarkable 30x speedup in training time while actually improving performance on classification tasks. The approach borrows successful techniques from classical neural networks and adapts them to quantum computing's unique constraints, potentially making quantum machine learning experiments far more practical for researchers restricted to classical simulation. While this primarily benefits near-term research rather than solving hardware limitations, it could significantly accelerate the development and testing of quantum algorithms before they reach actual quantum devices.

— Mark Eatherly

Summary

Quantum operations on pure states can be fully represented by unitary matrices. Variational quantum circuits, also known as quantum neural networks, embed data and trainable parameters into gate-based operations and optimize the parameters via gradient descent. The high cost of training and the low fidelity of current quantum devices, however, restrict much of quantum machine learning to classical simulation. For few-qubit problems with large datasets, training the matrix elements directly, as is done with weight matrices in classical neural networks, can be faster than decomposing data and parameters into gates. We propose a method that trains matrices directly while maintaining unitarity through a single regularization term added to the loss function. A second training step, circuit alignment, then recovers a gate-based architecture from the resulting soft-unitary. On a five-qubit supervised classification task with 1000 datapoints, this two-step process produces a trained variational circuit in under four minutes, compared to over two hours for direct circuit training, while achieving lower binary cross-entropy loss. In a second experiment, soft-unitaries are embedded in a hybrid quantum-classical network for a reinforcement learning cartpole task, where the hybrid agent outperforms a purely classical baseline of comparable size.
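The unitarity regularizer can be illustrated with a small NumPy sketch. Everything below is an illustrative assumption, not the paper's exact formulation: real-valued matrices (so "unitary" becomes "orthogonal"), a hypothetical penalty weight `lam`, plain gradient descent, and descent on the penalty alone rather than a full task loss. The penalty R(W) = ||WᵀW − I||²_F is zero exactly when W is orthogonal, so adding it to the loss pulls the freely trained matrix toward the unitary manifold:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8            # e.g. a 3-qubit operator acts on 2**3 = 8 amplitudes
lam = 1.0          # penalty weight (hypothetical value)
lr = 0.05          # learning rate (hypothetical value)

# Unconstrained "soft-unitary": a freely trainable square matrix.
W = rng.normal(scale=0.3, size=(dim, dim))

def unitarity_penalty(W):
    """R(W) = ||W^T W - I||_F^2, zero iff W is orthogonal."""
    D = W.T @ W - np.eye(dim)
    return np.sum(D * D)

for _ in range(500):
    # In the full method this gradient term would be added to the
    # task-loss gradient; here we descend on the penalty alone to show
    # that it drives W onto the orthogonal (unitary) manifold.
    grad = 4.0 * W @ (W.T @ W - np.eye(dim))   # dR/dW
    W -= lr * lam * grad

print(unitarity_penalty(W))  # near 0 once W is numerically orthogonal
```

Because the constraint is enforced only softly, intermediate iterates may drift off the manifold, which is precisely why the paper's second step re-expresses the result as an exactly unitary gate sequence.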
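The circuit-alignment step can likewise be sketched in miniature. The single-qubit setup below is entirely an illustrative assumption (an Rz–Ry–Rz ansatz, a global-phase-invariant infidelity objective, and finite-difference gradient descent); the paper's multi-qubit architecture and training details are not reproduced. The point is only the shape of the step: fit a gate-parameterized circuit to a fixed target unitary:

```python
import numpy as np

rng = np.random.default_rng(1)

def rz(t):  # single-qubit Z rotation
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def ry(t):  # single-qubit Y rotation
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def circuit(p):  # Rz-Ry-Rz ansatz covering all single-qubit unitaries
    return rz(p[0]) @ ry(p[1]) @ rz(p[2])

# Stand-in for a trained soft-unitary: an exactly unitary 2x2 target
# obtained from a QR decomposition of a random complex matrix.
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

def infidelity(p):
    """1 - |tr(C(p)^dag Q)| / 2: invariant to the target's global phase."""
    return 1.0 - abs(np.trace(circuit(p).conj().T @ Q)) / 2.0

p = np.zeros(3)
eps, lr = 1e-6, 0.2  # finite-difference step and learning rate (hypothetical)
for _ in range(2000):
    grad = np.array([(infidelity(p + eps * e) - infidelity(p - eps * e))
                     / (2 * eps) for e in np.eye(3)])
    p -= lr * grad

print(infidelity(p))  # near 0: gate angles now reproduce the target
```

The recovered angles define an exactly unitary gate sequence, so the aligned circuit can run on gate-based hardware even though the soft-unitary it was fit to was only approximately unitary.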