## Hamiltonian complexity talk

On Tuesday I gave a talk at the IMA conference at the IMS. I’ve put the slides here (it is a huge file, alas, as I included lots of pictures…)

In this talk I took the opportunity to popularise a problem that I hope will provide a fruitful avenue for future research in the emerging field of hamiltonian complexity.

Before I describe this problem, I’d like to say a couple of words about hamiltonian complexity itself: this field, which gained considerable momentum last year thanks to a couple of tightly focussed workshops at Leiden and Santa Fe, is (roughly speaking) aimed at exploring the interplay between theoretical physics and computational complexity theory. The central question of hamiltonian complexity, I would argue, is: “how hard is it to simulate a physical system?” To actually answer this question quantitatively requires that we be rather careful about what we mean by “physical system”, “simulation”, and “hardness”.

For “physical system” it has been convenient for quantum information theorists to focus on quantum spin systems (i.e., a set of $n$ quantum spins arrayed in a low-dimensional lattice). Thus we say the size of such a system is $n$. (Note, however, that the dimension of the Hilbert space of such a system scales at least as fast as $2^n$.)
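To make the exponential growth of the Hilbert space concrete, here is a small illustrative sketch (not from the talk) that builds the hamiltonian of a transverse-field Ising chain out of Kronecker products; the matrix dimension is $2^n$, which is exactly why direct approaches break down quickly:

```python
import numpy as np

# Single-spin Pauli matrices
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_on_site(op, site, n):
    """Embed a single-site operator at position `site` in an n-spin chain."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

def ising_hamiltonian(n, J=1.0, h=1.0):
    """Transverse-field Ising chain: H = -J sum_i Z_i Z_{i+1} - h sum_i X_i."""
    dim = 2 ** n  # Hilbert-space dimension grows as 2^n
    H = np.zeros((dim, dim))
    for i in range(n - 1):
        H -= J * op_on_site(Z, i, n) @ op_on_site(Z, i + 1, n)
    for i in range(n):
        H -= h * op_on_site(X, i, n)
    return H

H = ising_hamiltonian(4)
print(H.shape)  # (16, 16): already 16-dimensional for just 4 spins
```

Each additional spin doubles the matrix dimension, so storing $H$ densely like this is hopeless beyond a couple of dozen spins.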

For “simulation” we typically mean a classical computer carrying out some algorithm, although one could widen this definition. To count as a simulation the algorithm in question needs to be able to calculate a prediction for some physical observable.

For “hardness”, the field of computational complexity provides a now-standard framework to make this precise: one quantifies the temporal and spatial resources (measured in arithmetic operations and bits, respectively) required by a computation to carry out the desired task.

With these ingredients in hand we can now state the main problem of hamiltonian complexity (well, of theoretical physics, really):

**The simulation problem (equilibrium).**

Input: A hamiltonian $H$ for a system of size $n$; an observable $A$; a tolerance $\epsilon$; a temperature $T$.

Output: an approximation $\alpha$ to the expectation value $\langle A\rangle=\operatorname{tr}(Ae^{-H/kT})/\mathcal{Z}$, where $\mathcal{Z}=\operatorname{tr}(e^{-H/kT})$ is the partition function, which satisfies $|\alpha- \langle A \rangle| \le \epsilon$.
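For very small systems the simulation problem can be solved by brute force; here is a minimal Python sketch (illustrative only — the exact diagonalisation it relies on itself costs time exponential in $n$, which is precisely the point):

```python
import numpy as np

def thermal_expectation(H, A, T, k=1.0):
    """<A> = tr(A e^{-H/kT}) / Z, computed by exact diagonalization.

    Diagonalizing costs O(d^3) with d = 2^n, so this brute-force
    approach is only viable for a handful of spins.
    """
    evals, evecs = np.linalg.eigh(H)
    # Shift by the ground energy so the Boltzmann weights cannot overflow.
    w = np.exp(-(evals - evals.min()) / (k * T))
    Z = w.sum()                                   # partition function
    A_diag = np.diag(evecs.conj().T @ A @ evecs)  # <E_i| A |E_i>
    return float(np.real(np.sum(w * A_diag)) / Z)

# Sanity check on a single spin: H = Z, observable A = Z.
# The exact answer is <Z> = -tanh(1/kT).
Zp = np.array([[1.0, 0.0], [0.0, -1.0]])
print(thermal_expectation(Zp, Zp, T=1.0))  # ≈ -tanh(1) ≈ -0.7616
```

Everything here is standard textbook statistical mechanics; the function names are my own and not part of any library.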

The simulation problem, as stated, is simply the task of making a prediction for some observable. Hamiltonian complexity theory adds another dimension to the simulation problem by asking how hard it is to actually calculate such a prediction. This is clearly an important question: after all, suppose it took $2^n$ fundamental arithmetic operations to make a prediction for some experimentally accessible observable $A$. This would be a disaster; even for 50 spins, one would need to perform $\approx 2^{50}$ operations, which is clearly infeasible in practice. So we’d have to give up on making any predictions whatsoever for such systems!
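The blow-up can be put in perspective with a back-of-the-envelope calculation (assuming, purely for illustration, a machine performing $10^9$ operations per second):

```python
# Time to carry out 2^n arithmetic operations at an assumed
# rate of 10^9 operations per second.
SECONDS_PER_YEAR = 3.156e7

for n in (20, 50, 100):
    seconds = 2.0 ** n / 1e9
    print(f"n = {n:3d}: {seconds:9.3e} s  (~{seconds / SECONDS_PER_YEAR:.1e} years)")
```

Every extra spin doubles the cost, so by a hundred spins the required time dwarfs the age of the universe — and this is before one even worries about the memory needed to hold a state of $2^n$ amplitudes.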

Obviously physicists are very good at making predictions, so this nightmare scenario never really occurs in practice. Hamiltonian complexity is thus aimed at explaining why this is the case. This is far from trivial because it is now relatively straightforward to write down 1D quantum spin systems whose equilibrium properties really do require something like $2^n$ arithmetic operations to approximate to within fairly reasonable tolerances! (We heard a lot about the state of the art of these results on Tuesday.)

Hamiltonian complexity is broken up into many sub-fields aimed at understanding variations on the theme of simulation and complexity. There are many subtle and profound techniques that have been developed to construct semi-realistic systems whose physical properties are hard to approximate. And there is yet another set of approaches aimed at actually simulating real (and imaginary) physical systems.

My talk was aimed at describing some situations where we actually can make predictions about quantum spin systems. I described two different situations where physical properties can be efficiently predicted on a classical computer. Typically — and the results I described are no exception — a proof that a physical property can be efficiently simulated actually provides a provably efficient algorithm to calculate the prediction.

What I feel is the most interesting open question now is to better understand the interplay between the observable $A$ one wishes to predict and the hamiltonian $H$ to be simulated. I think this adds a new layer of richness to the complexity-theoretic landscape for quantum spin systems: most work to date has focussed on the LOCAL HAMILTONIAN PROBLEM, which is the special case of the SIMULATION PROBLEM, above, in which one only demands predictions for the observable $H$ itself. However, it is completely plausible that if you aren’t interested in the energy, but in some other observable $A$, the resulting SIMULATION PROBLEM may become easier or harder than the LOCAL HAMILTONIAN PROBLEM!

### 2 Responses to Hamiltonian complexity talk

1. Carlos says:

Nice and interesting talk. I am currently working on tensor networks, so there is a lot of overlap with the questions discussed here.

2. tobiasosborne says:

Dear Carlos,

I would love to hear about any progress you make on connections between tensor networks and, say, the simulation problem; please feel free to leave a comment if you make any progress!

Sincerely,
Tobias