Here is a primer on one aspect of making quantum computing technology reliable enough for mainstream use: error correction.

Much has been written about the disruptive potential of quantum computing. However, one major barrier to realizing this potential is the technology’s high susceptibility to noise and calibration errors.

The ability to manage or reduce error rates in quantum hardware will determine how quickly the world can begin to leverage a new era of computing power.

If we can understand the impact of errors and how well current techniques can compensate for them, we can gain insights into what stage of development the quantum computing industry has reached.

Traditional computing error correction
Conventional computing errors typically occur because one or more bits unexpectedly flip. To correct these bit flips and return the system to the expected state, error correction strategies have been developed.

Today, classical computing error correction is usually unnecessary; it is reserved for cases where a failure would be catastrophic and/or where the computer will operate in an environment more likely to introduce errors, such as space missions. In general, error correction consists of three steps (a small code sketch follows the list):

    1. Encoding states into more bits
    2. Looking at the encoded state at a regular time interval
    3. Correcting the state based on the observation from step two
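
To make these steps concrete, here is a minimal sketch of the classical three-bit repetition code in Python. The function names and values are ours, purely for illustration.

    # Step 1: encode one logical bit into three physical bits.
    def encode(bit):
        return [bit, bit, bit]

    # Steps 2 and 3: look at the encoded state and restore the majority value.
    def correct(bits):
        majority = 1 if sum(bits) >= 2 else 0
        return [majority, majority, majority]

    codeword = encode(0)      # [0, 0, 0]
    codeword[1] ^= 1          # noise flips the middle bit: [0, 1, 0]
    print(correct(codeword))  # [0, 0, 0] -- the flip is repaired

The majority vote works as long as at most one of the three bits flips between corrections, which is exactly why the time between corrections matters.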

Leaving more time between corrections leaves more chances for bit flip errors to occur, so any latency in the correction cycle is problematic in error-prone systems. As a result, the biggest challenge for traditional computing error correction has been speed: finding more effective and efficient ways to detect errors before they cause significant problems.

Sources of quantum error
In quantum computing, qubits can store the same binary states used in conventional computing, but quantum mechanical features — namely superposition and entanglement — also allow for additional states to be stored and manipulated.

A computing error, quantum or not, is any undesired operation that replaces the state of memory with another state. In conventional computers, an error on a single bit is limited to an accidental flip from 0 to 1, or from 1 to 0. Because quantum computers feature many more states than sequences of bits, there is room for many more types of undesired state alterations.
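
To make this concrete, the short sketch below (Python with numpy; an illustrative example, not from the original article) compares a bit flip with a phase flip acting on a superposition. The phase flip has no classical counterpart: it corrupts the state even though neither bit value is flipped.

    import numpy as np

    # A qubit in an equal superposition of 0 and 1.
    plus = np.array([1, 1]) / np.sqrt(2)

    X = np.array([[0, 1], [1, 0]])   # bit flip, the classical-style error
    Z = np.array([[1, 0], [0, -1]])  # phase flip, a purely quantum error

    print(np.allclose(X @ plus, plus))  # True: this superposition survives a bit flip
    print(np.allclose(Z @ plus, plus))  # False: the phase flip has changed the state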

Because qubits must leverage the effects of quantum mechanics, they are inherently small and very sensitive to interactions with their environment, which can introduce errors or destroy the stored state entirely.

Sources of quantum computing errors include:

    • External forces: Even small vibrations or variations in magnetic forces, electric currents, or ambient temperature can cause quantum computations to return incorrect results or, in some types of quantum computers, to lose the state of memory entirely.
    • Internal control: Since qubits are extremely sensitive to small fluctuations, the precision of the signals used to act on the stored states for computations must be very high. Any deviation from a perfect signal will result in errors.
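
As a rough numerical illustration of the second point, the sketch below (numpy, with a made-up 1% miscalibration) estimates the error introduced when a control pulse slightly overshoots its intended rotation.

    import numpy as np

    def rx(theta):
        """Single-qubit rotation about the X axis by angle theta."""
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -1j * s], [-1j * s, c]])

    zero = np.array([1, 0], dtype=complex)

    ideal = rx(np.pi) @ zero           # intended operation: a perfect bit flip
    actual = rx(np.pi * 1.01) @ zero   # control signal overshoots by 1%

    fidelity = abs(np.vdot(ideal, actual)) ** 2
    print(f"error probability per gate: {1 - fidelity:.1e}")  # ~2.5e-04

An error of a few parts in ten thousand per gate sounds small, but it accumulates over the many gates a useful computation requires, which is why error correction is needed on top of better hardware.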

Tackling quantum errors
Quantum error correction follows the same encoding, measurement, and recovery steps used for conventional computers. However, applying these steps to quantum computers brings new challenges.

In classical computing, we look at the encoded state to see what went wrong, in order to apply a correction. This is not possible with quantum computers.

One fundamental tenet of quantum mechanics is that looking at a quantum state changes it. This means that we cannot measure the encoded state directly without destroying the information that we are trying to preserve.

For this reason, quantum researchers have developed methods that allow us to retrieve information about the errors on the state without measuring the state directly. These methods rely on indirect measurements, which reveal nothing about which logical state we have and, ideally, do not disturb the state.
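
To give a flavour of how this works, the sketch below simulates the parity checks of the three-qubit bit-flip code in plain Python. The pair of parities (the "syndrome") pinpoints which qubit flipped without ever revealing whether the encoded logical value is 0 or 1. On real hardware these parities are measured with extra helper qubits; the classical simulation here is only illustrative.

    import random

    def syndrome(bits):
        # Two parity checks: compare qubits (0, 1) and qubits (1, 2).
        return (bits[0] ^ bits[1], bits[1] ^ bits[2])

    # Which qubit to flip back for each syndrome.
    RECOVERY = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

    logical = random.choice([0, 1])  # the information we want to protect
    encoded = [logical] * 3          # step 1: encode into three qubits

    encoded[2] ^= 1                  # noise flips qubit 2

    s = syndrome(encoded)            # step 2: (0, 1) identifies qubit 2,
                                     # yet says nothing about `logical`
    if RECOVERY[s] is not None:
        encoded[RECOVERY[s]] ^= 1    # step 3: apply the recovery

    assert encoded == [logical] * 3  # the logical information survives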

Given how fragile quantum states are in the presence of environmental noise, it is likely that large encodings will be needed. That is, hundreds if not thousands of qubits may be required to encode a single qubit state. As noted by Science.org, Google researchers believe it may be possible to sustain a qubit indefinitely by expanding error correction efforts across 1,000 qubits.

Bottlenecks to overcome
Much like in classical computing, where there is uncertainty about which error occurred when a state is measured, quantum measurement results tell us only that one error from a given set of possible errors happened: we do not know for sure which one. Finding the best method for choosing a correction is a difficult problem, and one where work is still ongoing.

If we know the noise acting on a system, we can calculate the best possible strategy for small codes. For larger codes, however, this calculation becomes prohibitively expensive.
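
For a small code and a known noise model, the "best possible strategy" can be computed by brute force, as in the hedged sketch below: enumerate every error pattern, and for each syndrome keep the most probable pattern as the correction. The numbers are made up, and the loop doubles in size with every added qubit, which is exactly why this approach breaks down for large codes.

    from itertools import product

    p = 0.01  # assumed independent bit-flip probability per qubit
    n = 3     # three-qubit repetition code

    def syndrome(error):
        return (error[0] ^ error[1], error[1] ^ error[2])

    def probability(error):
        k = sum(error)                       # number of flipped qubits
        return p ** k * (1 - p) ** (n - k)

    # For each syndrome, keep the most likely error pattern as the correction.
    best = {}
    for error in product([0, 1], repeat=n):  # 2**n patterns: fine for n = 3
        s = syndrome(error)
        if s not in best or probability(error) > probability(best[s]):
            best[s] = error

    for s, correction in sorted(best.items()):
        print(f"syndrome {s} -> flip qubits {correction}")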

Take, for example, the surface code, the most popular code for large-scale quantum error correction. Rather than pre-selecting corrections for each measurement outcome and storing them in a lookup table, a classical algorithm is used to select recovery operations during every error correction step. This algorithm introduces significant latency.

Even for smaller codes using lookup tables, classical computers are still required to route measurement outcomes, select a recovery operation, and send that information back to the quantum computer. This again introduces significant latency, thereby making the codes less effective.

This is one major bottleneck to effective quantum error correction that many in the field are working actively to overcome. 


Arnaud Carignan-Dugas
Stefanie Beale