


Q# Advent Calendar: What role does randomization have in Hamiltonian Simulation?

Christopher Kang, CS @ UW + Technical Intern @ PNNL

Foreword

In my exceedingly short career in quantum, Hamiltonian simulation has always been front of mind: for me, there is something so intrinsically cool about exposing the secret lives of molecules and understanding their dynamics.

Molecular simulations are well known as a major application area for quantum computing, but performing large- or even small-scale computations has been challenging on NISQ devices, largely because of noise. Even though we have new algorithms that attempt to overcome these challenges, like the Variational Quantum Eigensolver (Peruzzo et al., A variational eigenvalue solver on a photonic quantum processor), these approaches only seem promising for small systems and still require thousands or millions of shots per Hamiltonian to achieve accurate results.

We need techniques that bridge the gap between VQE, which has low hardware requirements but is costly and inaccurate, and phase-estimation or quantum-walk approaches, which are accurate but have steep hardware requirements. One potential method is leveraging stochasticity: namely, creating randomized Hamiltonian simulation algorithms. This page aims to be an extremely brief introduction to the context of Hamiltonian simulation, why randomization is necessary, and some early work.

Finally, I want to provide a disclaimer: I am still an undergrad and am (not yet!) an expert in this area. If you spot any mistakes or places where more context would be beneficial, I wholeheartedly welcome the feedback – feel free to email me.

Happy holidays!

Hamiltonian Simulation Overview

Objective

A brief overview of Hamiltonian simulation is provided below. If you’d like to learn more, take a look at the overview I wrote for last year’s advent calendar.

The objective of Hamiltonian simulation is typically to understand the behavior of some Hamiltonian matrix $H$ by implementing its time evolution $e^{-iHt}$ for some small timestep $t \ll 1$. $H$ takes the following form:

\[H = \sum_{j = 1}^m h_j H_j\]

where the $H_j$ are normalized sub-Hamiltonians with real coefficients $h_j \in \mathbb{R}$. Since $H$ is Hermitian, $-iHt$ is anti-Hermitian, so $e^{-iHt}$ is unitary and can thus be implemented on a quantum computer. Additionally, the $H_j$ are usually chosen such that each $e^{-i h_j H_j t}$ is implementable.
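To make the setup concrete, here is a minimal numerical sketch (my own illustration, not drawn from any of the cited papers) that builds a toy two-qubit $H = \sum_j h_j H_j$ from Pauli terms, with made-up coefficients, and checks that $e^{-iHt}$ is unitary:

```python
# A toy construction of H = sum_j h_j H_j from normalized Pauli terms.
# The terms and coefficients below are made up purely for illustration.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Sub-Hamiltonians H_j (normalized Pauli strings) and real coefficients h_j.
terms = [np.kron(Z, I2), np.kron(I2, Z), np.kron(X, X)]
coeffs = [0.9, 0.6, 0.3]
H = sum(h * Hj for h, Hj in zip(coeffs, terms))

t = 0.05  # small timestep, t << 1
U = expm(-1j * H * t)

# Since H is Hermitian, e^{-iHt} should be unitary.
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True
```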

Unfortunately, $e^{-iHt}$ does not usually have a directly implementable form, even though it is unitary. We need a technique to connect the implementable $e^{-i h_j H_j t}$ terms with the desired operator $e^{-iHt}$. Thus, we use product formulas (like Trotter-Suzuki) to approximate $e^{-iHt}$ via the $e^{-i h_j H_j t}$ terms. For example, a first-order symmetric decomposition is provided below (Berry et al., Efficient quantum algorithms for simulating sparse Hamiltonians):

\[e^{-iHt} \approx \prod_{j = 1}^m e^{-i h_j H_j t /2} \prod_{j' = m}^1 e^{-i h_{j'} H_{j'} t / 2}\]

The cited paper also describes how Suzuki generalized this formula to $k$th-order formulas, each with progressively better error scaling in $t$ (albeit with an increasing number of applications of $e^{-i h_j H_j t}$).
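As a quick sanity check of the symmetric formula above, the sketch below (again my own toy example with arbitrary terms and coefficients) compares one step of the product formula against the exact $e^{-iHt}$; the per-step error should shrink roughly as $t^3$:

```python
# Compare the symmetric product formula against the exact e^{-iHt} for a
# toy two-qubit Hamiltonian with made-up terms and coefficients.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

terms = [np.kron(Z, I2), np.kron(I2, Z), np.kron(X, X)]
coeffs = [0.9, 0.6, 0.3]
H = sum(h * Hj for h, Hj in zip(coeffs, terms))

def symmetric_step(t):
    """prod_{j=1..m} e^{-i h_j H_j t/2} * prod_{j'=m..1} e^{-i h_j' H_j' t/2}."""
    U = np.eye(4, dtype=complex)
    ordered = list(zip(coeffs, terms))
    for h, Hj in ordered + ordered[::-1]:
        U = U @ expm(-1j * h * Hj * t / 2)
    return U

for t in (0.1, 0.05, 0.025):
    err = np.linalg.norm(symmetric_step(t) - expm(-1j * H * t), ord=2)
    print(f"t = {t:6.3f}   spectral-norm error = {err:.2e}")
# Halving t should shrink the per-step error by roughly 8x.
```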

Challenge

While we can derive theoretical bounds on the above approach, those bounds fail to account for physical limitations, especially on NISQ devices. In particular, increasing the number of terms $m$ in the Hamiltonian increases the depth of the required circuit (in the above case, the scaling is linear in $m$). This means that simulating larger Hamiltonians requires not only more qubits, but also better error resilience.

Furthermore, we haven’t discussed the role of qubit connectivity in circuit implementation – namely, the strategies used to simulate $e^{-i h_j H_j t}$ often involve long CNOT chains, which hamper depth reductions (Table A1, Whitfield et al., Simulation of Electronic Structure Hamiltonians Using Quantum Computers).

Thus, simulating larger Hamiltonians on NISQ devices is severely limited by these depth requirements; we need strategies to reduce circuit depth.
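To give a rough sense of this depth pressure, here is a back-of-the-envelope count (my own sketch, not from any of the cited papers) using the standard CNOT-ladder construction, in which exponentiating a weight-$w$ Pauli string costs roughly $2(w-1)$ CNOTs; the Hamiltonian sizes and Pauli weights below are hypothetical:

```python
# Rough CNOT count for one symmetric product-formula step: the step applies
# 2m exponentials e^{-i h_j H_j t/2}, and a weight-w Pauli exponential costs
# about 2(w - 1) CNOTs in the standard ladder construction (ignoring the
# single-qubit basis changes and the Rz rotation).

def cnots_per_pauli_exponential(weight: int) -> int:
    # CNOT ladder down, one Rz rotation, CNOT ladder back up.
    return 2 * (weight - 1)

def symmetric_step_cnots(pauli_weights) -> int:
    # Forward and reversed sweeps: 2m exponentials per symmetric step.
    return 2 * sum(cnots_per_pauli_exponential(w) for w in pauli_weights)

# Hypothetical Hamiltonians with m terms of average Pauli weight 4.
for m in (10, 100, 1000):
    print(f"m = {m:5d} terms  ->  ~{symmetric_step_cnots([4] * m)} CNOTs per step")
# The count (and hence depth, absent parallelization) grows linearly with m.
```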

Randomization

A key idea in randomized product formulas is that we can randomly decompose the operation $e^{-iHt}$ into a series of fixed-depth operators; then, by sampling from these operators, we can infer properties of $e^{-iHt}$.

In the following two sections, I give a brief, ‘spiritual’ overview of two papers: Earl Campbell’s A random compiler for fast Hamiltonian simulation and Faehrmann et al.’s Randomizing multi-product formulas for improved Hamiltonian simulation.

Campbell’s Compiler

The following overview is a significantly abbreviated summary of a review I wrote for a graduate quantum computing course; the full review is linked here.

The key idea of Campbell’s compiler, QDRIFT, is that, instead of using all $m$ of the $e^{-i h_j H_j t}$ operators when simulating $e^{-iHt}$, we select a fixed number $N$ of operators from a distribution proportional to the $h_j$; i.e., operators with larger coefficients are selected more frequently. We then implement only the $N$ sampled operators on the device, run this stochastic circuit multiple times, and average the results.

To validate this approach, Campbell performs a density-matrix analysis, obtaining bounds on the difference between the stochastically compiled density matrix and the one produced by the ideal evolution $e^{-iHt}$. In particular, when comparing randomized compilation and Trotter, QDRIFT has a superior dependence on the number of terms $m$ at the cost of worse scaling in $t$. This is typically acceptable because $t \ll 1$.

This is great news from an implementation perspective! We have obtained a circuit with a fixed depth (at most $N$ sampled terms) and error scaling that is asymptotically comparable to our Trotter approach.
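Below is a minimal classical emulation of this sampling idea (my own sketch with a made-up two-qubit Hamiltonian, assuming all $h_j > 0$; Campbell's paper gives the precise channel and bounds): sample $N$ term indices with probability proportional to $h_j$, evolve under each sampled term for time $\lambda t / N$ with $\lambda = \sum_j h_j$, and average the output density matrices over many random circuits.

```python
# Classically emulate the randomized-compilation idea and compare the
# averaged output state against exact evolution under e^{-iHt}.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

terms = [np.kron(Z, I2), np.kron(I2, Z), np.kron(X, X)]
coeffs = np.array([0.9, 0.6, 0.3])   # all h_j > 0 here
H = sum(h * Hj for h, Hj in zip(coeffs, terms))

t, N, shots = 0.1, 20, 1000
lam = coeffs.sum()
probs = coeffs / lam       # sample terms with larger coefficients more often
tau = lam * t / N          # evolution time per sampled term

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0
rho_in = np.outer(psi, psi.conj())

# Average the channel over many randomly compiled fixed-depth circuits.
rho_avg = np.zeros((4, 4), dtype=complex)
for _ in range(shots):
    U = np.eye(4, dtype=complex)
    for j in rng.choice(len(terms), size=N, p=probs):
        U = expm(-1j * tau * terms[j]) @ U
    rho_avg += U @ rho_in @ U.conj().T
rho_avg /= shots

U_exact = expm(-1j * H * t)
rho_exact = U_exact @ rho_in @ U_exact.conj().T

# Trace distance between the averaged stochastic output and exact evolution.
print("trace distance ~", 0.5 * np.abs(np.linalg.eigvalsh(rho_avg - rho_exact)).sum())
```

Each sampled circuit has depth at most $N$, while the averaging happens classically across shots.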

Faehrmann et al.’s Multi-product Formulas

Faehrmann et al. worked to combine the idea of stochastic Hamiltonian simulation with a different type of product formula: multi-product formulas. These formulas make use of Linear Combinations of Unitaries (LCU), a technique introduced by Childs and Wiebe (Hamiltonian Simulation Using Linear Combinations of Unitary Operations). The key idea of LCU is that, given unitaries $U, V$, we can implement (a suitably normalized) $U + V$ with some probability, which can be boosted with other quantum techniques. This allows for a new type of product formula of the following form (Eq. 12, Childs and Wiebe):

\[M_{k, \chi}(t) = \sum_{q = 1}^{k + 1} C_q S_{\chi}(t / \ell_q)^{\ell_q}\]

$S_\chi$ represents our existing single-product formula, while $M_{k, \chi}$ is our multi-product formula. That is, we can create a new representation of $e^{-iHt}$ that is not a single product of $e^{-i h_j H_j t}$ terms, but rather a sum of such products. This allows us to achieve superior bounds, because summing product formulas can cancel error terms in the expansion more directly.
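As a small classical illustration of why sums of product formulas help (my own toy example, not the specific formulas analyzed in either paper), the sketch below forms the two-term combination $\frac{4}{3} S(t/2)^2 - \frac{1}{3} S(t)$ from the symmetric formula $S$, a simple Richardson-style extrapolation that cancels the leading error term:

```python
# Compare a single symmetric product formula S(t) against the two-term
# multi-product combination (4/3) S(t/2)^2 - (1/3) S(t) for a toy,
# made-up two-qubit Hamiltonian.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

terms = [np.kron(Z, I2), np.kron(I2, Z), np.kron(X, X)]
coeffs = [0.9, 0.6, 0.3]
H = sum(h * Hj for h, Hj in zip(coeffs, terms))

def S(t):
    """Symmetric single-product formula."""
    U = np.eye(4, dtype=complex)
    ordered = list(zip(coeffs, terms))
    for h, Hj in ordered + ordered[::-1]:
        U = U @ expm(-1j * h * Hj * t / 2)
    return U

for t in (0.2, 0.1, 0.05):
    exact = expm(-1j * H * t)
    single_err = np.linalg.norm(S(t) - exact, ord=2)
    multi = (4 / 3) * np.linalg.matrix_power(S(t / 2), 2) - (1 / 3) * S(t)
    multi_err = np.linalg.norm(multi - exact, ord=2)
    print(f"t = {t:5.2f}   single: {single_err:.1e}   multi: {multi_err:.1e}")
# The combination is a sum of unitaries rather than a single unitary, so on
# hardware it would be implemented probabilistically via LCU.
```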

Faehrmann et al. then performed analyses similar to Campbell’s, examining the error scaling of the formulas and culminating in Table I and Figure 4 of the paper. Figure 4 essentially demonstrates that, in low-timestep regimes, the multi-product formulas produce superior error bounds over existing methods like Trotter-Suzuki or LCU alone. Furthermore, they show that these multi-product formulas can be randomized (and thus have a fixed depth) with high probability and comparable error scaling, at the cost of an increased number of trials (Theorem 2).

As an aside: I admit that I do not fully understand the implications of randomization in the multi-product setting. As seen in the article, Theorem 2 describes the impacts of randomization and can be combined with the multi-product formulas to obtain a new bound, but this is not heavily discussed in the paper.

Caveats to Randomization

While I’ve touted some of the benefits randomization provides, there are some caveats. For both formulas, these analyses are largely asymptotic, meaning that huge constant factors may be hidden. This is significant because we are prioritizing medium-scale performance, not the accuracy as $m \to \infty$. This necessitates further numerical analyses and real-world tests.

Additionally, these approaches still ignore the connectivity challenges of NISQ devices. Further device-level compilation techniques will be required, as the above algorithms assume that $e^{-i h_j H_j t}$ can be cheaply implemented; additionally, LCU assumes that operations can easily be made controllable. Neither of these assumptions is necessarily valid, and both often break down when qubits only have 3-4 available neighbors.

Last, these approaches may only become relevant in the ‘mature’ NISQ domain, i.e. when devices are still noisy but have much greater volume and connectivity than at present. Even though circuit depth can be fixed, it may still be challenging to accurately run circuits with $N = 20$ or even $N = 10$ sampled terms on many devices. Bridging the gap between immature and mature NISQ devices may require new techniques or a heavy reliance on existing approaches like VQE.

Concluding Thoughts

From a philosophical perspective, there are two key morals of the above work:

First, implementing algorithms on NISQ devices requires a constraint-first approach: the noisiness and limited depth of current devices force us to rethink abstractions and focus on overcoming hardware constraints. As seen above, co-designing algorithms with hardware is critical on NISQ devices because of the significance of decoherence, the limited number of qubits, and limited connectivity. Stochasticity directly attacks decoherence by limiting circuit depth, at the cost of an increased number of shots.

This leads to the second point: there is a continued need for exploration into the interplay between quantum and classical devices. In particular, quantum algorithm design often focuses on problems that quantum computers alone can solve. However, in the near term, we should expand our viewpoint to consider adaptive algorithms where a classical ‘driver’ actively utilizes results produced by the quantum algorithm to inform parameter selection. In the above examples, the results are often simply averaged together. Would there be benefits to pursuing an adaptive or bandit-inspired approach, where a classical driver actively tries to ascertain the impact of the underlying commutator structure between the $h_j H_j$ terms? While nascent, I wonder how we can better approach hybrid classical-quantum algorithms and leverage them in hardware-constrained domains.

Thanks and Resources

I’m so grateful that Mariia put out an open call to participate! It was a lot of fun to continue this tradition. All papers referenced are linked, though I admit that I am still an adolescent scientist and am working hard to learn more. Again, don’t hesitate to reach out!