hardware algorithms machine_learning

QNAS: A Neural Architecture Search Framework for Accurate and Efficient Quantum Neural Networks

Curator's Take

This article tackles one of the most pressing practical challenges in quantum machine learning: how to design quantum neural networks that actually work well on today's limited quantum hardware. QNAS represents a significant step forward by automatically optimizing quantum circuit architectures across three critical dimensions simultaneously: accuracy, computational efficiency, and the overhead cost of splitting large circuits across multiple smaller quantum devices. The framework's finding that different data types benefit from different quantum encoding strategies (angle embeddings for images, amplitude embeddings for tabular data) offers valuable practical guidance for researchers building real-world quantum ML applications. By achieving 97% accuracy on MNIST with just an 8-qubit circuit, QNAS demonstrates that thoughtful automated design can extract impressive performance from near-term quantum devices without requiring the massive quantum computers of the future.

— Mark Eatherly

Summary

Designing quantum neural networks (QNNs) that are both accurate and deployable on NISQ hardware is challenging. Handcrafted ansätze must balance expressivity, trainability, and resource use, while limited qubit counts often necessitate circuit cutting. Existing quantum architecture search methods primarily optimize accuracy, control quantum resource usage only heuristically, and mostly ignore the exponential overhead of circuit cutting. We introduce QNAS, a neural architecture search framework that unifies hardware-aware evaluation, multi-objective optimization, and cutting-overhead awareness for hybrid quantum-classical neural networks (HQNNs). QNAS trains a shared-parameter SuperCircuit and uses NSGA-II to jointly optimize three objectives: (i) validation error, (ii) a runtime cost proxy measuring wall-clock evaluation time, and (iii) the estimated number of subcircuits under a target qubit budget. QNAS evaluates candidate HQNNs with only a few epochs of training and discovers clear Pareto fronts that reveal the tradeoffs between accuracy, efficiency, and cutting overhead.

Across the MNIST, Fashion-MNIST, and Iris benchmarks, we observe that embedding type and CNOT-mode selection significantly affect both accuracy and efficiency: angle-y embedding and sparse entangling patterns outperform other configurations on image datasets, while amplitude embedding excels on tabular data (Iris). On MNIST, the best architecture achieves 97.16% test accuracy with a compact 8-qubit, 2-layer circuit; on the more challenging Fashion-MNIST, 87.38% with a 5-qubit, 2-layer circuit; and on Iris, 100% validation accuracy with a 4-qubit, 2-layer circuit. QNAS surfaces these design insights automatically during search, guiding practitioners toward architectures that balance accuracy, resource efficiency, and practical deployability on current hardware.
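The heart of the search is Pareto-based selection over the three objectives, all of which are minimized. The full NSGA-II machinery (non-dominated sorting ranks plus crowding distance) is beyond a short sketch, but the core notion of keeping only non-dominated architectures can be illustrated in a few lines of Python. The candidate names and objective values below are invented for illustration and are not taken from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b:
    a is no worse on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep each (name, objectives) pair not dominated by any other candidate."""
    return [
        (name, objs)
        for name, objs in candidates
        if not any(dominates(other, objs) for _, other in candidates if other != objs)
    ]

# Toy search space. Each tuple is
# (validation error, runtime-cost proxy, estimated subcircuit count),
# mirroring the three QNAS objectives; all are minimized.
candidates = [
    ("angle_y_sparse", (0.03, 1.2, 1)),  # cheap, accurate, fits the qubit budget
    ("angle_y_dense",  (0.04, 2.5, 1)),  # dominated by angle_y_sparse
    ("amplitude_deep", (0.02, 4.0, 3)),  # most accurate, but costly to cut
    ("angle_x_dense",  (0.05, 2.0, 2)),  # dominated by angle_y_sparse
]

front = pareto_front(candidates)
# The surviving front exposes the accuracy-vs-overhead tradeoff:
# angle_y_sparse (cheap) and amplitude_deep (accurate) both remain.
```

A genetic algorithm such as NSGA-II repeatedly mutates and recombines candidates, then uses this kind of dominance test to decide which architectures survive into the next generation.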