hardware algorithms error_correction sensing policy

Optimizing Logical Mappings for Quantum Low-Density Parity Check Codes

Curator's Take

This research tackles a critical bottleneck in fault-tolerant quantum computing: how to efficiently map logical operations onto quantum low-density parity check (QLDPC) codes like the Gross code, which promise dramatically lower hardware overhead than surface codes. The work reveals that existing mapping approaches designed for near-term quantum devices fall short for these advanced error correction schemes because they fail to account for the complex multi-qubit Pauli operations and hierarchical structure inherent to QLDPC codes. By developing a two-stage mapping pipeline that uses hypergraph partitioning followed by priority-based cluster assignment, the researchers demonstrate significant reductions in inter-module measurement errors—the dominant source of logical errors in these systems. This represents an important step toward making QLDPC codes practical for large-scale fault-tolerant quantum computers, potentially enabling quantum applications with far fewer physical qubits than previously thought necessary.

— Mark Eatherly

Summary

Early demonstrations of fault-tolerant quantum systems have paved the way for logical-level compilation. For fault-tolerant applications to succeed, execution must finish with a low total program error rate (i.e., a low program failure rate). In this work, we study a promising candidate for future fault-tolerant architectures with low spatial overhead: the Gross code. Compilation for the Gross code entails compiling to Pauli-based computation and then reducing the rotations and measurements to the Bicycle ISA. Depending on the configuration of modules and the placement of code modules on hardware, one can reduce the number of resulting Bicycle instructions to produce a lower overall error rate. We find that NISQ-era and existing FTQC mappers are insufficient for mapping logical qubits on Gross code architectures because (1) they do not account for the two-level nature of the logical qubit mapping problem, which separates into code modules with distinct measurements, and (2) they naively account only for length-two interactions, whereas Pauli products can have length up to $n$, where $n$ is the number of logical qubits in the circuit. For these reasons, we introduce a two-stage pipeline that first uses hypergraph partitioning to create in-module clusters, and then runs a priority-based algorithm to efficiently assign clusters onto hardware. We find that our mapping policy reduces the error contribution from inter-module measurements, the largest source of error in the Gross code, by up to $\sim36\%$ in the best case, with an average reduction of $\sim13\%$. On average, we reduce the failure rates from inter-module measurements by $\sim22\%$ with localized factory availability, and by $\sim17\%$ on grid architectures, allowing hardware developers to be less constrained in developing scalable fault-tolerant systems thanks to software-driven reductions in program failure rates.
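To make the two-stage pipeline concrete, here is a minimal Python sketch of the idea: logical qubits are nodes of a hypergraph whose hyperedges are multi-qubit Pauli products, stage one greedily groups qubits into fixed-capacity code modules, and stage two places the modules by priority (most inter-module traffic first, on the lowest-index, e.g. factory-adjacent, sites). All function names, the greedy scoring, and the priority heuristic are illustrative assumptions, not the paper's actual algorithm, which uses a proper hypergraph partitioner.

```python
# Toy sketch of a two-stage logical-qubit mapping pipeline.
# Stage 1: greedy hypergraph "partitioning" into fixed-capacity modules
#          (nodes = logical qubits, hyperedges = multi-qubit Pauli products).
# Stage 2: priority-based placement of modules onto hardware sites.
# Heuristics here are illustrative, not the paper's implementation.

from collections import defaultdict

def partition_qubits(hyperedges, num_qubits, capacity):
    """Assign each qubit to the module sharing the most Pauli products
    with it; open a new module when no existing one has room."""
    modules = []        # list of sets of qubits
    qubit_module = {}   # qubit -> module index
    degree = defaultdict(int)
    for edge in hyperedges:
        for q in edge:
            degree[q] += 1
    # Visit high-degree qubits first so hot qubits seed the modules.
    for q in sorted(range(num_qubits), key=lambda v: -degree[v]):
        best, best_score = None, -1
        for i, mod in enumerate(modules):
            if len(mod) >= capacity:
                continue
            # Score = number of Pauli products linking q to this module.
            score = sum(1 for e in hyperedges if q in e and mod & set(e))
            if score > best_score:
                best, best_score = i, score
        if best is None:
            modules.append(set())
            best = len(modules) - 1
        modules[best].add(q)
        qubit_module[q] = best
    return modules, qubit_module

def count_cut_edges(hyperedges, qubit_module):
    """Hyperedges spanning >1 module need inter-module measurements,
    the dominant error source the mapper tries to minimize."""
    return sum(1 for e in hyperedges
               if len({qubit_module[q] for q in e}) > 1)

def place_modules(modules, hyperedges, qubit_module, num_sites):
    """Priority placement: modules touched by the most cut hyperedges
    get the lowest-index (e.g. factory-adjacent) hardware sites."""
    cut_load = defaultdict(int)
    for e in hyperedges:
        touched = {qubit_module[q] for q in e}
        if len(touched) > 1:
            for m in touched:
                cut_load[m] += 1
    order = sorted(range(len(modules)), key=lambda m: -cut_load[m])
    return {m: site for site, m in enumerate(order[:num_sites])}
```

For example, six qubits with Pauli products `[(0,1,2), (3,4,5), (0,3), (1,2), (4,5)]` and module capacity 3 partition into `{0,1,2}` and `{3,4,5}`, leaving only the product `(0,3)` as a cut hyperedge; a pairwise (length-two) mapper would have had to approximate the three-qubit products as cliques of edges, which is exactly the limitation the summary points out.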