If AI or a quantum computer solves problems humans can't, how do we know they're right?
09-23-2025

Quantum computers are built to tackle problems that push past what human minds, and even our best supercomputers, can manage. That power raises a simple but tough question: when a quantum device spits out an answer we cannot directly test, how do we know it is correct?

A team at Swinburne University of Technology has proposed a way to test certain photonic quantum computers without waiting centuries for a classical benchmark.

The idea is to compare statistics that are easy to compute and measure against theoretical predictions, then use the comparison to flag errors and check whether the device is behaving as intended.

Validation from quantum tests

The work was led by Alexander Dellios at Swinburne University’s Centre for Quantum Science and Technology Theory.

His group focused on Gaussian boson sampling (GBS), a photonic approach that uses squeezed light to generate complex probability patterns.

In this setup, a network mixes many light modes and detectors record how many photons land in each output.

For photon-number-resolving detectors, the probability of each detection pattern is calculated using a Hafnian, a matrix function that sums products of matrix entries over all perfect matchings of the indices and is notoriously expensive to compute.
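
For intuition, here is a minimal brute-force sketch of that definition in Python (an illustration, not the authors' code): the Hafnian of a symmetric matrix is the sum, over every perfect matching of its indices, of the products of the matched entries, which is why the cost explodes so quickly.

```python
import numpy as np

def hafnian(A):
    """Hafnian of an even-sized symmetric matrix: the sum, over every
    perfect matching of the indices, of the product of the matched
    entries. Brute force, so only practical for small matrices."""
    A = np.asarray(A)

    def match(indices):
        if not indices:
            return 1.0
        i, rest = indices[0], indices[1:]
        # Pair index i with each remaining index j, then match the rest.
        return sum(A[i, j] * match(rest[:k] + rest[k + 1:])
                   for k, j in enumerate(rest))

    return match(tuple(range(A.shape[0])))

# For a 4x4 symmetric matrix the hafnian is
# A[0,1]*A[2,3] + A[0,2]*A[1,3] + A[0,3]*A[1,2].
B = np.array([[0, 1, 2, 3],
              [1, 0, 4, 5],
              [2, 4, 0, 6],
              [3, 5, 6, 0]], dtype=float)
print(hafnian(B))   # 1*6 + 2*5 + 3*4 = 28.0
```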

When experiments use threshold detectors, which only register whether any light arrived, the relevant matrix function changes.

Those outcomes instead connect to a Torontonian, a related matrix function introduced by researchers in Toronto that gives the probabilities of threshold-detector click patterns.
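
A similarly small sketch shows the Torontonian's inclusion-exclusion structure. It assumes the common convention that mode k occupies rows and columns k and k + N of a 2N x 2N matrix; again this is an illustration, not the authors' code, and it scales exponentially with the number of modes.

```python
import itertools
import numpy as np

def torontonian(O):
    """Torontonian of a 2N x 2N matrix O via inclusion-exclusion:
    Tor(O) = sum over subsets Z of the N modes of
             (-1)**(N - |Z|) / sqrt(det(I - O_Z)),
    where O_Z keeps the rows and columns belonging to the modes in Z.
    Assumes mode k occupies rows/columns k and k + N (a convention,
    not necessarily the one a given experiment uses). Exponential in N,
    so only usable for a handful of modes."""
    O = np.asarray(O, dtype=complex)
    N = O.shape[0] // 2
    total = (-1.0) ** N   # Z = empty set: determinant of an empty matrix is 1
    for r in range(1, N + 1):
        for Z in itertools.combinations(range(N), r):
            idx = list(Z) + [k + N for k in Z]
            sub = np.eye(2 * r) - O[np.ix_(idx, idx)]
            total += (-1.0) ** (N - r) / np.sqrt(np.linalg.det(sub))
    return total
```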

How the quantum test works

The Swinburne method does not attempt to compute every hard probability that defines the full distribution.

It tests lower-dimensional summaries of the data, sometimes called grouped count probabilities, and compares them with theory across many bins in a single shot.
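
As a toy illustration of that style of check (not the authors' grouped probabilities), the sketch below bins simulated click patterns by their total count and compares the binned frequencies against a candidate model with a chi-squared distance; the mode count, click probabilities, and independence assumption are all invented for the example.

```python
import numpy as np
from math import comb

# Toy illustration of one grouped statistic: the distribution of the
# total number of clicks across all detectors, compared between
# simulated "experimental" samples and a candidate model.
rng = np.random.default_rng(0)

modes, shots = 16, 100_000
p_model = 0.30   # click probability per mode under the candidate model
p_true = 0.33    # the "experiment" has a small systematic error

# Simulated click patterns (independent modes, purely for illustration).
clicks = rng.random((shots, modes)) < p_true
observed = np.bincount(clicks.sum(axis=1), minlength=modes + 1)

# Model prediction for the same grouped statistic: with independent,
# identical modes, the total click count is binomially distributed.
k = np.arange(modes + 1)
pmf = np.array([comb(modes, i) for i in k]) * p_model ** k * (1 - p_model) ** (modes - k)
expected = pmf * shots

# A chi-squared distance over the well-populated bins flags the mismatch.
mask = expected > 5
chi2 = np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask])
print(f"chi-squared over {mask.sum()} bins: {chi2:.1f}")
```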

Under the hood, the simulation backbone is the positive-P representation. This phase-space technique reproduces normally ordered moments exactly for any quantum state and scales well to many modes.
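
The positive-P machinery itself is intricate, but the underlying phase-space idea can be sketched with a simpler cousin, the Glauber-Sudarshan P representation of a thermal light mode: normally ordered moments turn into ordinary averages over random phase-space amplitudes, which is what makes this style of simulation scale. The mean photon number below is an arbitrary choice for the example.

```python
import numpy as np

# Phase-space idea in miniature (a simpler cousin of the positive-P
# method, not the authors' simulation): for a thermal light mode with
# mean photon number nbar, the P distribution is a complex Gaussian,
# so normally ordered moments become plain stochastic averages.
rng = np.random.default_rng(1)

nbar, samples = 2.0, 1_000_000
alpha = np.sqrt(nbar / 2) * (rng.standard_normal(samples)
                             + 1j * rng.standard_normal(samples))

n1 = np.mean(np.abs(alpha) ** 2)   # estimates <a^dag a>     = nbar
n2 = np.mean(np.abs(alpha) ** 4)   # estimates <a^dag^2 a^2> = 2 * nbar**2
print(f"<a^dag a>     ~ {n1:.3f}  (exact {nbar})")
print(f"<a^dag^2 a^2> ~ {n2:.3f}  (exact {2 * nbar ** 2})")
```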

The team reports a computational speedup of about 10^18 over direct classical simulation in a 288-mode case. That scale-up matters because it turns a near-hopeless calculation into a practical test that can run on a workstation.

“Some problems would take even the fastest supercomputer millions of years to solve. Our methods let us check in minutes on a laptop whether a GBS experiment is producing the right results and what errors may be present,” said Dellios.

What the team found

To demonstrate the approach, the team analyzed data from a recent high profile boson sampling experiment.

In 2022, the team behind the Borealis machine estimated that a top classical supercomputer would need more than 9,000 years to produce a single exact sample that the device generated in 36 microseconds.

When the Swinburne group ran their quantum test, they found the measured probability distribution did not match the original target model.

The quantum data aligned better with a modified distribution that accounts for thermalization and measurement imperfections.

This outcome does not mean the photonic machine lacks a computational advantage. It shows that the device was solving a slightly different statistical problem than the one the experimenters intended.

The next step is to determine whether sampling from that alternative distribution remains computationally hard. If it is still hard, the computational promise stands, but the target needs to be stated precisely.

Quantum computer tests matter

Verification and computational advantage are related but different.

Advantage compares runtimes between the best classical methods and the quantum device for a stated task, while validation asks whether the hardware is solving the stated task at all.

Scalable validation will help teams locate and correct error sources before they grow into systematic mismatches.

That workflow can steer photonic hardware toward settings where the output retains its nonclassical features and the stated task stays intact.

Positive results from these tests can also guide parameter tuning.

If a small change in squeezing or transmission corrects the statistics, engineers gain a direct knob to improve fidelity without weeks of blind trial and error.

As the number of modes and detected photons climbs, the full output distribution spreads over far too many patterns to estimate from data or to compute directly.

That is where grouped statistics shine, since they remain estimable with limited samples and still carry the high-order correlations that define nonclassical behavior.

Quantum testing and the future

The team plans to check whether the alternative model uncovered by their tests is still in a class that is hard to simulate classically.

That answer will clarify whether the photonic device preserved its quantum character throughout the run or slipped into a regime that classical algorithms can mimic.

“Developing large-scale, error-free quantum computers is a Herculean task that, if achieved, will revolutionize fields such as drug development, AI, cyber security, and allow us to deepen our understanding of the physical universe,” said Dellios.

The study is published in Quantum Science and Technology.
