
Towards High Performance Quantum Computing (HPQ): Parallelisation of the Hamiltonian Auto Decomposition Optimisation Framework (HADOF)

Curator's Take

This research represents a crucial step toward making quantum optimization practically viable by cleverly sidestepping current hardware limitations through problem decomposition and parallel execution. The Hamiltonian Auto Decomposition Optimisation Framework (HADOF) breaks large combinatorial problems into smaller chunks that can run simultaneously across multiple quantum processors, achieving impressive 3-4x speedups on real IBM hardware while maintaining solution quality. What makes this particularly exciting is that it addresses both the qubit count bottleneck and the notorious queue times that plague current quantum cloud services, essentially creating a pathway to "High Performance Quantum" computing that mirrors classical supercomputing approaches. The successful demonstration on real-world genome assembly problems shows this isn't just theoretical progress but a practical bridge toward quantum advantage in optimization tasks that matter today.

— Mark Eatherly

Summary

The practical applicability of quantum optimisation on near-term devices is constrained by limited qubit counts and hardware noise, which restrict the scalability of quantum algorithms for combinatorial problems. Simulating large quantum circuits classically is likewise difficult, constrained by memory requirements. The Hamiltonian Auto Decomposition Optimisation Framework (HADOF) addresses this by decomposing large QUBOs into smaller subproblems that can be solved iteratively on quantum or classical backends. This enables quantum QUBO algorithms to scale beyond device limits and to be simulated on classical hardware. In this research, we extend the evaluation of HADOF by benchmarking it on real IBM QPUs across sequential, single-QPU parallel, and multi-QPU parallel execution modes, advancing toward High Performance Quantum (HPQ) computing for combinatorial optimisation. Experimental results on IBM quantum hardware demonstrate up to a 3-4x reduction in wall-clock time when utilising four QPUs compared to the sequential baseline, while maintaining comparable solution quality. Notably, even single-QPU execution benefits from parallelised job orchestration and execution, yielding up to a 3x speedup, and simulated results predict over 5x speedup in parallel execution mode. We further validate the practical applicability of the approach on real-world genome assembly instances, showing that both sequential and parallel HADOF variants achieve competitive accuracy while significantly improving time-to-solution. These results highlight the importance of parallelism at both the algorithmic and system levels, positioning HADOF as a viable pathway toward scalable quantum optimisation.
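The decompose-and-parallelise idea can be sketched in a few lines: split the QUBO matrix into blocks, solve each block concurrently (here by classical brute force standing in for a quantum backend), and concatenate the partial solutions. This is a minimal illustration, not HADOF's actual algorithm or API: the function names are invented, the simple diagonal-block split drops inter-block couplings, and the real framework refines subproblems iteratively rather than solving each block once.

```python
# Hypothetical sketch of QUBO block decomposition with parallel sub-solving.
# Names (split_qubo, solve_subqubo, parallel_hadof_sketch) are illustrative
# assumptions, not HADOF's real interface.
import itertools
from concurrent.futures import ThreadPoolExecutor

def split_qubo(Q, block_size):
    """Split an n x n QUBO matrix into diagonal blocks.

    Off-block couplings are simply dropped here for brevity; the actual
    framework accounts for them across iterations."""
    n = len(Q)
    blocks = []
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        blocks.append([row[start:end] for row in Q[start:end]])
    return blocks

def solve_subqubo(Qb):
    """Brute-force the sub-QUBO: minimise x^T Qb x over x in {0,1}^k.

    In practice this step would be dispatched to a QPU or simulator."""
    k = len(Qb)
    best = None
    for bits in itertools.product((0, 1), repeat=k):
        energy = sum(Qb[i][j] * bits[i] * bits[j]
                     for i in range(k) for j in range(k))
        if best is None or energy < best[0]:
            best = (energy, bits)
    return best[1]

def parallel_hadof_sketch(Q, block_size, workers=4):
    """Solve each block concurrently and concatenate the bit strings."""
    blocks = split_qubo(Q, block_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(solve_subqubo, blocks))
    return tuple(bit for part in parts for bit in part)

# Toy 4-variable QUBO made of two independent 2-variable blocks.
Q = [
    [-1,  2,  0,  0],
    [ 0, -1,  0,  0],
    [ 0,  0, -2,  3],
    [ 0,  0,  0, -1],
]
solution = parallel_hadof_sketch(Q, block_size=2)
```

Because the two blocks are independent, each worker can run at the same time; this is the algorithmic analogue of the multi-QPU speedups reported above, where each sub-QUBO job is orchestrated to a separate device.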