Curator's Take
This research exposes a critical security vulnerability in distributed quantum computing that could undermine current approaches to scaling quantum algorithms. The authors show that adversaries can exploit shared entanglement between quantum processors to inject structured noise that preserves training signals while systematically steering quantum machine learning models toward wrong answers, a particularly insidious attack because the system appears to be learning normally. Their notion of "Kraus expressibility" gives quantum researchers a new theoretical tool for understanding how noise affects quantum circuit capabilities beyond simple error rates. As the field moves toward networked quantum systems and quantum cloud computing, this work highlights the urgent need for quantum-native security protocols rather than implicitly trusted connections between quantum devices.
— Mark Eatherly
Summary
Distributed quantum algorithms offer a promising pathway to scale variational quantum algorithms beyond the constraints of noisy intermediate-scale quantum hardware. However, existing approaches implicitly assume a trusted entanglement-sharing layer across quantum processors. We show that this assumption introduces a fundamental vulnerability: adversarial perturbations of shared entanglement induce structured gate-level noise that directly impacts quantum learning. We develop a framework that maps entanglement-level perturbations to gate-level noise via an explicit Kraus representation. To quantify their impact, we introduce Kraus expressibility, a metric that generalizes unitary expressibility to noisy quantum channels. We then establish a trade-off between Kraus expressibility and trainability of noisy quantum circuits through gradient variance analysis. Our analysis reveals that an adversary can manipulate Kraus expressibility to maintain sufficiently large cost gradients (avoiding barren plateaus) while systematically biasing optimization toward incorrect solutions. We validate these findings through numerical simulations, demonstrating adversarial degradation of expressibility and trainability.
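The summary's central construction is a map from entanglement-level perturbations to gate-level noise expressed as a Kraus channel. The paper's specific mapping and its "Kraus expressibility" metric are not detailed here, so the sketch below only illustrates the standard ingredient they build on: a completely positive trace-preserving channel in Kraus form, rho → Σ_k K_k rho K_k†, with an amplitude-damping-style perturbation of hypothetical strength `p` standing in for adversarially induced noise.

```python
import numpy as np

def apply_kraus(rho, kraus_ops):
    """Apply a CPTP channel given by its Kraus operators to a density matrix."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Hypothetical perturbation strength (not from the paper).
p = 0.1

# Amplitude-damping Kraus operators on one qubit.
K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
K1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)

# Completeness check: K0†K0 + K1†K1 = I, so the channel is trace-preserving.
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

# |+><+| as the input state; the channel biases it toward |0>.
rho = np.full((2, 2), 0.5, dtype=complex)
rho_out = apply_kraus(rho, [K0, K1])
assert np.isclose(np.trace(rho_out).real, 1.0)  # trace preserved
```

The trace-preservation check is the structural property any such induced noise channel must satisfy; the paper's contribution is characterizing how an adversary shapes the Kraus operators themselves, which this sketch does not model.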