Quantum computers from a “newb” user perspective
No, I won't begin by reiterating the immense power of quantum devices such as quantum computers; they have already been excessively hyped. In what follows, I will focus on quantum computers as a prominent example; however, our research can also be applied to other devices such as quantum sensors.

In today's world, anyone can freely access quantum computers on the cloud. With a basic understanding of quantum mechanics and some knowledge of quantum circuits, you can conduct your first experiment within a day. However, the initial excitement fades on the second day, when you attempt something more ambitious than the previous day's simple "hello world" circuit. It becomes apparent that the devices are severely impeded by noise, rendering them virtually useless. On the third day, you learn about detector errors and how to address them. While things improve, the device remains inadequate due to the excessive noise associated with gate operations. For most users, this is where the journey ends. In our group, however, we aim to equip users with tools that can mitigate noise to the extent that they regain trust in their quantum computers. To achieve this, we focus on two noise-related tasks: benchmarking and quantum error mitigation (QEM).

Benchmarking and Diagnostics
Measuring noise and other errors, including calibration errors, in quantum devices is not a trivial task. Why do we bother measuring it? By classifying noise into different error mechanisms, we can determine which strategies to employ for its removal. Quantifying noise is also crucial for monitoring the success of noise mitigation tools and evaluating computation quality. However, fully mapping the noise in a quantum computer can be as resource-intensive as executing the algorithm itself. Full process tomography, for example, is not scalable, meaning that as the number of qubits increases, the resources required for implementation become unmanageable. Full tomography is impractical even for a dozen qubits.
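
To see why, consider a quick back-of-the-envelope count (a sketch in Python; the counting assumes the standard description of an n-qubit process as a CPTP map):

```python
# Back-of-the-envelope: parameter count for full quantum process tomography.
# An n-qubit process (Hilbert-space dimension d = 2**n), modeled as a CPTP map,
# has d**4 real parameters in its Choi matrix; trace preservation removes d**2,
# leaving d**4 - d**2 independent parameters to estimate.

for n in (1, 2, 5, 12):
    d = 2**n
    params = d**4 - d**2
    print(f"{n:>2} qubits: ~{params:.2e} independent parameters")
```

At twelve qubits this is already on the order of 10^14 parameters, before even counting the measurement shots needed to estimate each one.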

Alternative benchmarking techniques that efficiently provide a coarse-grained understanding of noise have been developed. We further emphasize the word "diagnostics" because we seek not only to capture the amount of noise but also to quantify the contribution of different error mechanisms. Additionally, while the term benchmarking can also refer to performance aspects unrelated to noise and errors, diagnostics specifically focuses on the quality of the output.

Coping with noise in quantum computers

Quantum Error Correction

Error correction is perceived as the ultimate solution to all noise problems in quantum devices. However, it requires gates with minimal noise and substantial hardware overhead, since quantum error correction, like its classical counterpart, relies on redundancy: many physical qubits collectively represent a single logical qubit with reduced noise. Error correction addresses noise in real time, fixing errors before they accumulate. Currently, the hardware overhead is overwhelming, with hundreds to thousands of physical qubits needed to represent a single logical qubit. Despite some intriguing small-scale demonstrations, experimentalists are still far from achieving this on a larger scale.
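
To illustrate the redundancy principle, here is a toy sketch using the classical three-bit repetition code; real quantum codes must additionally handle phase errors and avoid directly measuring the encoded state, which is part of why their overhead is so much larger:

```python
import random

# Toy illustration of the redundancy principle: a classical 3-bit repetition
# code. Quantum codes rest on the same idea (many physical qubits encode one
# logical qubit), but must also handle phase errors and avoid measuring the
# encoded state directly, which is part of why their overhead is far larger.

def encode(bit):
    return [bit] * 3  # one logical bit -> three physical bits

def noisy_channel(bits, p_flip=0.05):
    return [b ^ (random.random() < p_flip) for b in bits]  # independent flips

def decode(bits):
    return int(sum(bits) >= 2)  # majority vote corrects any single flip

random.seed(0)
trials = 100_000
errors = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate: {errors / trials:.4f}  (physical flip rate: 0.05)")
```

The logical error rate drops from 5% to under 1% because two simultaneous flips are needed to fool the majority vote.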

Quantum Error Mitigation

Quantum error mitigation (QEM) has emerged as an alternative to quantum error correction and has gained popularity in recent years. QEM can be found in almost any paper involving the measurement of expectation values on quantum computers, and it has also been considered in other contexts, such as metrology. While several approaches to QEM exist, they all share two ingredients: executing multiple noisy circuits and combining the different measurements in post-processing to mitigate the noise. It is a powerful idea that has been demonstrated in many experiments. However, several key questions remain open. Most QEM methods require no hardware overhead, but they incur a sampling overhead and/or an increased runtime until the desired accuracy in the observable of interest is achieved. Conceptually, investing more time is manageable: instead of ten hours of computation, we might spend two weeks, a common waiting time in physical chemistry calculations. In practice, however, running long jobs on a quantum computer is challenging due to time-varying noise, and not all QEM protocols can account for this.
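
To make the sampling overhead concrete, here is a rough rule of thumb (a sketch with illustrative numbers): if a mitigation method amplifies the estimator's variance by a factor of roughly gamma^2, then reaching a target statistical error eps requires on the order of gamma^2/eps^2 shots.

```python
# Rough shot-count estimate. If a QEM estimator carries a sampling-overhead
# factor gamma (gamma = 1 means no mitigation), its variance is amplified by
# ~gamma**2, so a target standard error eps needs roughly gamma**2 / eps**2
# shots, up to observable-dependent constants.

def shots_needed(gamma, eps):
    return gamma**2 / eps**2

for gamma in (1.0, 2.0, 10.0):
    print(f"gamma = {gamma:>4}: ~{shots_needed(gamma, eps=0.01):.1e} shots")
```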

Two paradigmatic QEM methods

Let's briefly describe two QEM methods to better understand the time limitations and required resources: zero-noise extrapolation (ZNE) and probabilistic error cancellation (PEC). In ZNE, the additional circuits have the same functionality as the original circuit, but certain techniques are employed to amplify the noise. The idea is elegant in its simplicity: through controlled noise amplification, it becomes possible to fit a function that describes how the expectation value changes with the noise level. Once we have this function, setting its argument to "zero noise" allows us to estimate the noiseless value. One way to amplify the noise is to run the original circuit and its inverse multiple times sequentially.
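
A minimal sketch of the extrapolation step, using synthetic scale factors and expectation values in place of hardware data, and a linear fit as one common model choice:

```python
import numpy as np

# Minimal ZNE sketch: measure the same observable at amplified noise levels,
# fit a model in the noise-scale factor, and read off the value at scale 0.
# The numbers below are synthetic placeholders; on hardware they would come
# from, e.g., circuit-plus-inverse repetitions that amplify the noise.

scales = np.array([1.0, 2.0, 3.0])        # noise amplification factors
exp_vals = np.array([0.85, 0.73, 0.63])   # measured <O> at each scale (synthetic)

coeffs = np.polyfit(scales, exp_vals, deg=1)   # linear extrapolation model
zne_estimate = np.polyval(coeffs, 0.0)         # evaluate the fit at "zero noise"
print(f"zero-noise estimate: {zne_estimate:.3f}")
```

On real devices the choice of fit model matters: linear, polynomial, and exponential ansätze all appear in the literature, and a poor choice is one source of the bias discussed below.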

PEC utilizes circuits that are generally unrelated to the original circuit. These circuits serve as a basis set for constructing the desired ideal unitary. However, to employ PEC, we need to know the ideal unitary, which limits this technique to a small set of gates. Moreover, the noisy circuits that compose the basis must be fully characterized using gate set tomography, a highly non-scalable process. Consequently, PEC is typically applied to small circuits, often just a single gate. Although it is possible to concatenate PEC over different gates, the sampling overhead (or runtime) becomes exponential in the circuit's depth.
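
A minimal sketch of the PEC sampling step, using a made-up quasi-probability decomposition rather than one obtained from an actual gate set characterization:

```python
import random

# PEC sketch: the ideal channel is written as a quasi-probability mixture of
# implementable noisy operations, ideal = sum_i q_i * B_i, where the q_i sum
# to 1 but may be negative. One samples B_i with probability |q_i| / gamma
# (gamma = sum_i |q_i|) and reweights the outcome by sign(q_i) * gamma.
# The decomposition below is a toy example, not a characterized gate set.

q = [1.3, -0.2, -0.1]                # toy quasi-probabilities (sum to 1)
gamma = sum(abs(x) for x in q)       # sampling-overhead factor

def measure_with_operation(i):
    """Placeholder for running the circuit with noisy operation B_i and
    measuring the observable; returns a synthetic outcome here."""
    return [0.80, 0.75, 0.70][i]

random.seed(1)
shots = 200_000
total = 0.0
for _ in range(shots):
    i = random.choices(range(len(q)), weights=[abs(x) for x in q])[0]
    sign = 1.0 if q[i] >= 0 else -1.0
    total += sign * gamma * measure_with_operation(i)

print(f"PEC estimate: {total / shots:.3f}  (sampling overhead gamma = {gamma})")
```

Since gamma compounds multiplicatively when PEC is applied gate by gate, this is exactly the exponential-in-depth overhead mentioned above.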

Interestingly, these two methods lie at opposite ends of a spectrum. ZNE, to a certain extent, is noise-agnostic: it doesn't require a detailed understanding of the noise, such as whether it is Markovian. In contrast, PEC demands a precise mapping of the noise, specifically how it manifests in the circuit of interest. Even if we know that there is only single-qubit decoherence, we still need gate set tomography to understand how the noise modifies the evolution operator.

While some popular variants of ZNE (e.g., gate insertion) do not guarantee convergence to the correct mitigated value, PEC is, in principle, "bias-free." However, this property assumes precise knowledge of the noise model, which is rarely achievable in practice. Imperfections in gate set tomography introduce biases in the estimated noise model, which in turn bias the estimate of the ideal expectation value. Furthermore, noise parameters typically vary over time, rendering the learned noise model potentially irrelevant after a certain period. In ZNE, there exists a method to overcome this sensitivity to time-varying noise parameters.

QEM in our group

Recently, we have introduced a new QEM method called KIK [1]. On the scale mentioned earlier, which captures how much information a QEM method uses about the noise, our method is closer to ZNE but not entirely noise-agnostic. KIK is an adaptive method that employs a single, easily measurable parameter to quantify the noise strength. The measurement data are combined with different weights based on that noise strength. While ZNE is generally not bias-free and may not converge to the correct result, our KIK method has a performance lower bound that guarantees convergence to the correct result.

Working with real hardware poses significant challenges. Some issues are unrelated to the methods we develop, but we must account for them since they affect the final results. One example is detector errors; other effects may be present in different computers.
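
As a concrete example, here is a minimal sketch of one standard way to handle detector (readout) errors on a single qubit, using an illustrative calibration matrix rather than data from a specific device:

```python
import numpy as np

# Readout-error mitigation sketch: measure a calibration matrix A, where
# A[i, j] = Prob(read i | prepared j), then apply its inverse to the raw
# outcome probabilities. The numbers below are illustrative only.

A = np.array([[0.97, 0.06],    # P(read 0 | prepared 0), P(read 0 | prepared 1)
              [0.03, 0.94]])   # P(read 1 | prepared 0), P(read 1 | prepared 1)

p_measured = np.array([0.60, 0.40])       # raw outcome frequencies (illustrative)
p_mitigated = np.linalg.solve(A, p_measured)
print("mitigated probabilities:", np.round(p_mitigated, 3))
```

Plain inversion can occasionally yield slightly negative "probabilities" on noisy data, which is why constrained or least-squares variants are often used in practice.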

QEM continues to evolve rapidly, both theoretically and experimentally. One of the challenges is applying these methods to large circuits. IBM recently demonstrated such an application on their 127-qubit processor, surpassing the capabilities of cutting-edge numerical techniques [2]. While this achievement does not claim "quantum advantage," it provides evidence of the machine's potential power. Despite this impressive progress, we believe that much more can be accomplished by applying our techniques and combining them with other QEM methods.

In our group, our research primarily focuses on theory, with numerical calculations employed only for validation and demonstration purposes (we typically limit our numerical simulations to five spins). Additionally, we conduct numerous experiments on the cloud, particularly those involving pulse-level control. Furthermore, we collaborate with several experimental groups.