Curator's Take
This article provides crucial theoretical foundations for validating quantum advantage claims in photonic systems, addressing a significant gap in our understanding of how to properly benchmark experiments such as the recent large-scale Gaussian boson sampling demonstrations. The researchers develop rigorous mathematical tools to compute linear cross-entropy benchmarking scores across different experimental regimes, including the challenging "saturated" case where the photon number is comparable to the number of optical modes. Particularly intriguing is their finding that Gaussian boson sampling may face fundamental limitations in achieving the anticoncentration property needed for quantum advantage claims, a result that could reshape how the field approaches the validation of photonic quantum computing. This work arrives at a critical time, as multiple groups race to demonstrate photonic quantum advantage, and it provides the theoretical rigor needed to separate genuine breakthroughs from experimental artifacts.
— Mark Eatherly
Summary
Photonic architectures are one of the leading platforms for demonstrating quantum computational advantage, with Boson Sampling and Gaussian Boson Sampling as the primary schemes. Yet for these photonic primitives we lack a systematic theoretical understanding of linear cross-entropy benchmarking (LXEB), a central tool for testing quantum advantage proposals. In this work, we develop a representation-theoretic framework for the classical computation of average LXEB scores and second moments of output probability distributions, covering a range of quantum advantage experiments based on scattering $n$-photon states through $m$-mode Haar-random interferometers. Our methods apply in any regime, including the saturated regime, where the (expected) number of photons is comparable to the number of optical modes. The same second-moment techniques allow us to prove anticoncentration for traditional Fock-state Boson Sampling in the saturated regime. Interestingly, for Gaussian Boson Sampling, second moments are not sufficient to establish a meaningful anticoncentration statement. The technical core of our approach rests on decomposing two copies of the $n$-particle bosonic space $\mathrm{Sym}^n(\mathbb{C}^m)$ into irreducible representations of $\mathrm{U}(m)$. This reduces two-copy Haar averages to computing purities of initial states after partial traces over particles, highlighting the role that particle entanglement plays for LXEB and anticoncentration.
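To make the averaged quantities above concrete, here is a minimal numerical sketch, not the paper's representation-theoretic method, that brute-forces Fock-state Boson Sampling output probabilities for tiny $n$ and $m$ and estimates the Haar-averaged collision probability $\sum_S p(S)^2$. Comparing this average with the uniform value $1/D$, where $D = \binom{m+n-1}{n}$ counts Fock output configurations (one common convention), is what an anticoncentration-type statement and the ideal LXEB score are about. All function names are illustrative.

```python
# Toy numerical sketch: Haar-averaged second moments of Fock-state Boson Sampling
# output probabilities, by brute force (feasible only for very small n and m).
# This illustrates which quantities are being averaged; it is NOT the paper's method.
import itertools
from math import comb, factorial

import numpy as np


def haar_unitary(m, rng):
    """Draw an m x m Haar-random unitary via the QR construction."""
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))


def permanent(a):
    """Permanent of a small square matrix by summing over permutations."""
    n = a.shape[0]
    return sum(
        np.prod([a[i, p[i]] for i in range(n)])
        for p in itertools.permutations(range(n))
    )


def output_probabilities(u, n):
    """All Fock output probabilities for the input |1,...,1,0,...,0> (n photons, m modes)."""
    m = u.shape[0]
    probs = {}
    # An output configuration is a multiset of n occupied modes.
    for occ in itertools.combinations_with_replacement(range(m), n):
        s = [occ.count(j) for j in range(m)]
        # Rows repeated according to the output occupation, input columns 0..n-1.
        sub = u[np.array(occ), :][:, :n]
        p = abs(permanent(sub)) ** 2 / np.prod([factorial(k) for k in s])
        probs[tuple(s)] = p
    return probs


def average_collision_probability(n, m, trials=50, seed=0):
    """Haar-averaged sum_S p(S)^2, compared with the uniform value 1/D."""
    rng = np.random.default_rng(seed)
    dim = comb(m + n - 1, n)  # number of Fock output configurations
    vals = []
    for _ in range(trials):
        probs = output_probabilities(haar_unitary(m, rng), n)
        assert abs(sum(probs.values()) - 1.0) < 1e-9  # probabilities must sum to 1
        vals.append(sum(p ** 2 for p in probs.values()))
    return np.mean(vals), 1.0 / dim


if __name__ == "__main__":
    avg, uniform = average_collision_probability(n=3, m=6)
    print(f"Haar-averaged collision probability: {avg:.5f}  (uniform: {uniform:.5f})")
    # Anticoncentration-type statements require this ratio to stay bounded by a constant.
    print(f"ratio to uniform: {avg / uniform:.2f}")
```

Brute-force permanents scale factorially, so this sketch only runs for a handful of photons; the point of the paper is precisely to compute such Haar averages analytically in all regimes, including the saturated one where the photon number is comparable to the number of modes.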