Case Study 1: Quantum Advantage: Has It Been Achieved?
Overview
Whether a quantum computer has genuinely outperformed every possible classical computer on a well-defined computational task is one of the most debated questions in modern physics and computer science. This case study traces the key claims, the rebuttals, the shifting definitions, and the deeper epistemological challenge of proving that a quantum device has done something classically impossible.
Part 1: Defining the Goal
What Quantum Advantage Would Mean
The concept of "quantum supremacy" — a term coined by John Preskill in a 2012 paper — refers to the point at which a quantum device performs a computation that no classical computer can replicate in a reasonable time. The significance is profound: it would constitute experimental proof that quantum mechanics provides computational resources beyond those available to any classical system.
A rigorous demonstration requires three components:
- A well-specified computational task with clearly defined inputs and outputs.
- A quantum device that completes the task.
- Strong evidence (ideally a proof, but at minimum a convincing argument) that the best classical algorithm for the same task requires dramatically more time.
The difficulty is concentrated in component 3. Proving classical lower bounds — showing that no classical algorithm can do better — is one of the hardest problems in theoretical computer science. We cannot even prove $P \neq NP$, the most fundamental conjecture about classical computational limits. The practical consequence is that every quantum advantage claim is benchmarked against the best known classical algorithm, not the best possible classical algorithm. This leaves a permanent epistemic gap.
The Terminology Debate
The term "quantum supremacy" has been criticized on two fronts. Some researchers object to the word "supremacy" on linguistic and political grounds, preferring "quantum advantage" or "quantum primacy." More substantively, the word "supremacy" implies a permanent, definitive achievement, when in fact every experimental claim to date has been followed by improvements in classical algorithms that narrow the gap. "Quantum advantage" — conditional, provisional, and subject to revision — may be the more accurate term. We use it throughout this case study.
Part 2: The Google Sycamore Experiment (2019)
The Experiment
In October 2019, a team at Google AI Quantum, led by John Martinis, published a paper in Nature reporting the first claim of quantum advantage. The experiment used a 53-qubit superconducting processor called Sycamore to perform random circuit sampling (RCS): execute a random quantum circuit consisting of alternating layers of random single-qubit gates and fixed two-qubit entangling gates (fSim-type gates in Sycamore's case), and sample from the output distribution.
The key results:
- Sycamore completed one million samples from a 53-qubit, 20-cycle random circuit in 200 seconds.
- The output distribution was verified using the cross-entropy benchmark (XEB), which measures the correlation between the sampled distribution and the ideal (noiseless) distribution; see the sketch after this list. The measured XEB fidelity was $\sim 0.2\%$: small but statistically significant.
- Google estimated that simulating the same circuit on Summit (then the world's most powerful supercomputer) would require 10,000 years.
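To make the benchmark concrete, here is a minimal Python sketch of the linear XEB estimator, using made-up numbers rather than Sycamore data. It assumes the ideal output probabilities of the circuit can be computed classically, which is feasible only at small sizes and is precisely what becomes intractable at the scale where advantage is claimed.

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark: F = 2^n * mean(p_ideal(x_i)) - 1,
    averaged over the bitstrings x_i returned by the device.
    ~1 if the device samples from the ideal distribution, ~0 if it outputs uniform noise."""
    p = ideal_probs[np.asarray(samples)]
    return (2 ** n_qubits) * p.mean() - 1.0

# Toy usage: 3 qubits, so 8 possible bitstrings (indexed 0..7).
rng = np.random.default_rng(0)
ideal = rng.exponential(size=8)      # a Porter-Thomas-like spread of probabilities
ideal /= ideal.sum()

good = rng.choice(8, size=100_000, p=ideal)  # a noiseless device samples from p itself
bad = rng.integers(0, 8, size=100_000)       # a fully depolarized device outputs uniform noise
print(linear_xeb(good, ideal, 3))   # positive: 2**3 * (ideal**2).sum() - 1, up to sampling noise
print(linear_xeb(bad, ideal, 3))    # ~0
```

For Sycamore's largest circuits the ideal probabilities themselves are out of classical reach, so the reported fidelity was estimated from smaller and simplified ("patch" and "elided") circuits and extrapolated to the full experiment.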
The Claim and the Response
Google concluded that Sycamore had achieved quantum supremacy — the first unambiguous demonstration that a quantum device had performed a computation beyond the reach of any classical system.
IBM responded within days, before the paper was even formally published. In a blog post and preprint, IBM's quantum team argued that by using all of Summit's disk storage (250 petabytes) in addition to its RAM, the simulation could be completed in 2.5 days, not 10,000 years. The key difference: Google's estimate assumed that, because the full $2^{53}$-amplitude statevector cannot fit in RAM, the simulation must fall back on the much slower hybrid Schrödinger-Feynman method; IBM argued that a direct Schrödinger-style simulation remains feasible if the statevector is spilled to disk, trading storage for time. The arithmetic makes the point: $2^{53} \approx 9 \times 10^{15}$ complex amplitudes at 8-16 bytes each is roughly 70-150 petabytes, far beyond Summit's few petabytes of RAM but within its 250 PB file system.
IBM's rebuttal did not claim the simulation would be easy — 2.5 days on the world's most powerful supercomputer is still a herculean computation. But it narrowed the claimed advantage from a factor of $\sim 10^{10}$ (200 seconds vs. 10,000 years) to a factor of $\sim 10^3$ (200 seconds vs. 2.5 days).
The Classical Counterattack
Over the following years, classical simulation methods improved dramatically:
- 2021: Pan and Zhang (Chinese Academy of Sciences) demonstrated tensor network contraction methods (sketched after this list) that could simulate the Sycamore circuits on a GPU cluster in approximately 15 hours.
- 2022: Further refinements reduced the estimated classical cost to a few hours on dedicated hardware.
- 2023-2024: Approximate tensor network methods narrowed the gap further, though exact simulation of the full-fidelity Sycamore circuits remained computationally expensive.
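To make the tensor-network idea concrete: instead of evolving a full $2^n$-component statevector, these methods compute individual output amplitudes by contracting a network of gate tensors, with a cost governed by the contraction order rather than directly by $2^n$. Below is a toy Python sketch on a hand-written three-qubit circuit; it is not Sycamore's circuit, and real simulators handle thousands of tensors and spend most of their effort choosing a cheap contraction order.

```python
import itertools
import numpy as np

# Toy tensor-network contraction: compute single output amplitudes by contracting
# gate tensors with einsum, instead of storing a full 2^n statevector.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard
T = np.diag([1, np.exp(1j * np.pi / 4)])            # T gate
CZ = np.diag([1.0, 1, 1, -1]).reshape(2, 2, 2, 2)   # CZ as a rank-4 tensor
ket0 = np.array([1.0, 0.0])
basis = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

def amplitude(bits):
    """<bits| CZ(q1,q2) . T(q2) . CZ(q0,q1) . H⊗H⊗H |000>, as one einsum contraction."""
    b0, b1, b2 = bits
    return np.einsum(
        "a,b,c,da,eb,fc,ghde,if,jkhi,g,j,k->",
        ket0, ket0, ket0,                  # the three input |0> kets
        H, H, H,                           # a layer of single-qubit gates
        CZ,                                # CZ on (q0, q1)
        T,                                 # T on q2
        CZ,                                # CZ on (q1, q2)
        basis[b0], basis[b1], basis[b2],   # project onto the requested output bitstring
    )

probs = {b: abs(amplitude(b)) ** 2 for b in itertools.product((0, 1), repeat=3)}
print(probs[1, 1, 1])        # 0.125 for this particular circuit
print(sum(probs.values()))   # ~1.0: the contraction reproduces a normalized distribution
```

Methods in this family additionally slice tensor indices and batch many amplitudes per contraction to fit GPU memory; their cost depends on circuit structure and depth rather than directly on $2^n$, which is why deeper circuits push the classical cost back up.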
This pattern — quantum claim followed by classical improvement — illustrates a fundamental tension: the classical baseline is not fixed. Every quantum advantage claim incentivizes new research into classical algorithms.
Part 3: Google's Extended Experiments (2023-2024)
Raising the Bar
Google responded to the classical improvements by increasing the circuit size and depth. In 2023-2024, experiments with 67+ qubits and deeper circuits (more layers) significantly increased the estimated classical simulation cost, reaching $10^{12}$+ GPU-hours by some estimates. The quantum runtime remained minutes.
The strategy is clear: as classical algorithms improve, quantum hardware must grow faster. The question is whether the quantum side of the race has a permanent structural advantage (exponential scaling of Hilbert space dimension) or whether clever classical algorithms can keep up indefinitely.
Most complexity theorists believe the quantum side wins eventually — the evidence from computational complexity theory (the conjecture that BQP $\neq$ BPP) supports this view. But "eventually" may require larger, more fault-tolerant devices than currently exist.
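The structural argument in numbers: brute-force classical simulation must store a state space that doubles with each added qubit, as the short back-of-the-envelope calculation below illustrates.

```python
# Memory for a full n-qubit statevector at 16 bytes (complex128) per amplitude.
for n in (30, 40, 50, 53, 60, 70):
    print(f"{n:>2} qubits: {16 * 2.0**n:.2e} bytes")
# ~17 GB at 30 qubits (a workstation), ~144 PB at 53 qubits (Sycamore's scale),
# ~19 ZB at 70 qubits -- which is why classical contenders abandon brute force
# in favor of tensor networks and approximation.
```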
Part 4: IBM's Quantum Utility (2023)
A Different Approach
In June 2023, IBM shifted the conversation from "quantum supremacy" (demonstrating advantage on an artificial problem) to "quantum utility" (producing useful results on a real problem). Using a 127-qubit Eagle processor, IBM simulated the dynamics of a kicked transverse-field Ising model — a genuine condensed matter physics problem.
The experiment used zero-noise extrapolation (ZNE), an error mitigation technique: run the circuit at multiple noise levels, observe how the result degrades, and extrapolate to the zero-noise limit. The extrapolated results agreed with known exact solutions in small, tractable cases and provided new predictions in regimes where classical methods struggled.
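A minimal sketch of the extrapolation step in Python, with hypothetical measured values rather than IBM's data; the experimentally hard part (amplifying the noise in a controlled way and choosing a fit model) is only hinted at in the comments.

```python
import numpy as np

# Zero-noise extrapolation (ZNE), schematically. Assume the same circuit has been
# run at several noise-amplification factors c >= 1 (e.g. by "folding" gates
# G -> G G† G, which leaves the logical circuit unchanged but roughly scales the noise).
noise_scales = np.array([1.0, 1.5, 2.0, 3.0])   # amplification factors c
measured = np.array([0.42, 0.36, 0.31, 0.22])   # noisy <O> at each c (hypothetical values)

# Fit a low-degree polynomial in c and read off its value at c = 0.
# (Linear/Richardson and exponential fits are common alternatives; the choice matters.)
coeffs = np.polyfit(noise_scales, measured, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)
print(f"extrapolated zero-noise <O> ≈ {zne_estimate:.3f}")
```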
This was not quantum advantage in the strict sense — no proof of classical intractability was offered, and classical tensor network methods have since been applied to the same problem. But the experiment demonstrated that noisy quantum hardware, combined with classical error mitigation, could produce results of scientific interest today, without waiting for fault tolerance.
The Significance of "Utility"
The shift from "advantage" to "utility" reflects a maturing field. Rather than proving an abstract computational separation, the community increasingly asks: can quantum computers produce results that scientists and engineers actually want? This is a lower bar but a more practical one, and it may define the near-term value of quantum computing.
Part 5: The Boson Sampling Experiments
Photonic Quantum Advantage
Parallel to the circuit-based approaches, photonic platforms have pursued quantum advantage through boson sampling — the computational problem of sampling from the output distribution of identical photons passing through a linear optical network.
- 2020: The USTC team in China reported Gaussian boson sampling with 76 photons in 200 seconds (Jiuzhang processor), claiming a sampling rate roughly $10^{14}$ times faster than the best known classical simulation strategy, equivalent to hundreds of millions of years on a top supercomputer.
- 2021: An improved experiment with 113 photons (Jiuzhang 2.0) widened the gap further.
Boson sampling has a clean theoretical foundation: it is believed to be classically hard under the assumption that the permanent of a random Gaussian matrix is hard to approximate (a plausible but unproven conjecture). However, boson sampling is not a universal quantum computation, and, like RCS, it has no known practical application.
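The link to permanents can be made concrete. For standard (Fock-state) boson sampling, each output probability is proportional to the squared modulus of the permanent of a submatrix of the interferometer's unitary (Gaussian boson sampling uses the closely related hafnian), and the best known exact classical algorithms for the permanent, such as Ryser's formula sketched below in Python, scale exponentially in the matrix size, i.e. in the number of photons.

```python
import itertools
import numpy as np

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i sum_{j in S} A[i, j].
    O(2^n * n^2) as written (O(2^n * n) with Gray-code ordering) -- exponential either way."""
    n = A.shape[0]
    total = 0.0
    for subset in itertools.product((0, 1), repeat=n):
        cols = [j for j, keep in enumerate(subset) if keep]
        if not cols:
            continue
        row_sums = A[:, cols].sum(axis=1)          # sum_{j in S} A[i, j] for every row i
        total += (-1) ** len(cols) * np.prod(row_sums)
    return (-1) ** n * total

# A small random Gaussian matrix: 8 "photons" take 2^8 = 256 terms, which is instant.
# At Jiuzhang's 76 photons the same loop would have 2^76 ≈ 8e22 terms.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
print(permanent(A))
```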
Part 6: Assessment and Open Questions
The Scoreboard (2025)
| Experiment | Platform | Task | Claimed Advantage | Classical Response |
|---|---|---|---|---|
| Google Sycamore (2019) | Superconducting, 53 qubits | Random circuit sampling | 200s vs. 10,000 years | Reduced to hours-days by tensor networks |
| USTC Jiuzhang (2020) | Photonic, 76 photons | Gaussian boson sampling | 200s vs. $\sim$600 million years ($\sim 10^{14}\times$) | Partially challenged; gap remains large |
| USTC Zuchongzhi (2021) | Superconducting, 66 qubits | Random circuit sampling | Minutes vs. years | Narrower gap than Sycamore |
| IBM Eagle (2023) | Superconducting, 127 qubits | Ising model simulation | Quantum utility (not strict advantage) | Classical methods applied, gap debated |
| Google (2024) | Superconducting, 67+ qubits | Extended RCS | Minutes vs. $10^{12}$ GPU-hours | Under investigation |
What Has Been Established
- The physics works. Quantum computers produce output distributions consistent with quantum mechanics and inconsistent with simple classical models. They are not classical machines in disguise.
- Classical simulation is harder than initially assumed. The exponential scaling of Hilbert space does impose real costs on classical simulation, as evidenced by the enormous classical resources needed to match even modest quantum devices.
- But classical algorithms are better than initially assumed. Tensor network methods, in particular, have dramatically narrowed the gap between quantum and classical.
- Useful quantum advantage has not been achieved. No quantum computer has solved a practical problem faster than the best classical methods, when accounting for total resources (hardware cost, energy, development time).
- The path to fault tolerance will be decisive. Once error-corrected logical qubits become available, the quantum advantage for structured problems (Shor's algorithm, quantum simulation) will be unambiguous. The debate is about when, not if.
Open Questions
- Is random circuit sampling genuinely classically hard, or will future classical algorithms eventually match quantum performance? This is an open problem in computational complexity.
- Should "quantum advantage" require a useful application, or is demonstrating a computational separation (even on an artificial problem) sufficient?
- At what point does the total cost of the quantum computation (hardware, energy, infrastructure) need to be competitive with the classical alternative? A superconducting quantum computer requires a $\sim$\$15 million dilution refrigerator and consumes $\sim 25$ kW; a GPU cluster can be rented by the hour.
- Is the NISQ era a stepping stone to fault tolerance, or a dead end? Will the variational algorithms (VQE, QAOA) that dominate NISQ research deliver practical value, or will the field need to wait for full error correction?
Discussion Questions
- Google described their 2019 result as a "Wright brothers moment" for quantum computing. Is this analogy apt? The Wright brothers' flight was 12 seconds and 120 feet — useless by any practical standard — but it proved powered flight was possible. Does the Sycamore experiment prove something analogous?
- If classical simulation of a quantum circuit takes 15 hours on a GPU cluster while the quantum computer takes 200 seconds, is that quantum advantage? What if the GPU cluster costs \$100 and the quantum computer costs \$15 million?
- The "quantum advantage treadmill" — the need to keep improving quantum hardware to stay ahead of classical algorithms — is sometimes seen as a problem. Could it also be a benefit? How does the competition between quantum and classical methods improve both?
- Preskill has said that we are in the "NISQ era" — an era of noisy, intermediate-scale quantum devices that may or may not deliver practical value. What would it take to exit the NISQ era? What milestone would mark the beginning of the fault-tolerant era?
- China has invested heavily in quantum computing (Jiuzhang, Zuchongzhi) while the US leads in commercial quantum computing (IBM, Google, Quantinuum). How does geopolitical competition shape the quantum advantage race? Is this competition productive or distorting?
Further Reading
- Arute, F. et al. (2019). "Quantum supremacy using a programmable superconducting processor." Nature, 574, 505-510.
- Pednault, E. et al. (2019). "Leveraging secondary storage to simulate deep 54-qubit Sycamore circuits." arXiv:1910.09534.
- Pan, F. & Zhang, P. (2022). "Simulation of quantum circuits using the big-batch tensor network method." Physical Review Letters, 128, 030501.
- Zhong, H.-S. et al. (2020). "Quantum computational advantage using photons." Science, 370, 1460-1463.
- Kim, Y. et al. (2023). "Evidence for the utility of quantum computing before fault tolerance." Nature, 618, 500-505.
- Preskill, J. (2018). "Quantum computing in the NISQ era and beyond." Quantum, 2, 79.
- Aaronson, S. & Arkhipov, A. (2013). "The computational complexity of linear optics." Theory of Computing, 9, 143-252.