Suppose you want to make a prediction about some physical system. To do this you’ll need to somehow solve the defining equations, which for quantum systems means solving the Schrödinger equation. As far as I’m aware there are really only three general approaches one can take: (i) apply perturbation theory to relate your system to something you’ve already solved; (ii) use the variational principle; or (iii) use Monte Carlo sampling. (Of course, there is always another option, reserved for geniuses: exactly solve the system.)
Even though perturbation theory and the variational principle are elementary techniques that feature in any decent introductory quantum mechanics text, they have seen increasingly sophisticated applications in recent years. Indeed, there is arguably no more powerful and well-developed application of perturbation theory in any area of science than that found in quantum field theory.
The variational principle is itself the basis of a tremendous number of highly successful calculational tools in physics. Examples range from the Rayleigh-Ritz method in atomic physics and density functional theory in quantum chemistry to the density matrix renormalisation group in condensed matter physics. So pervasive is this technique that, apart from perturbation theory, it is essentially the only way we have to extract highly accurate predictions about quantum systems.
In this post I’d like to focus on recent applications of the variational principle in physics, in particular in condensed matter physics, and isolate what features these successful applications are exploiting. I will argue that the recently developed numerical renormalisation group algorithms that vary over the matrix product state (MPS) class (and its relatives) constitute the most sophisticated implementations of the variational principle to date. Finally, I’ll argue that there is an abstract physical explanation for this success: MPS and their relatives possess a fundamental property, namely, that they are parametrised holographically.
1. Matrix product states
Consider a many-particle quantum system. Its Hilbert space is big: even for a collection of $n$ distinguishable spin-$\frac{1}{2}$ particles its dimension scales exponentially, as $2^n$. Thus one might feel that any approach to solving physical systems based on wavefunctions is doomed to failure! After all, how can we possibly hope to parametrise such a space? However, we are slowly beginning to understand that the vastness of Hilbert space is an illusion; there are now many theoretical results (see this for a recent review) showing that natural states of local physical systems are only slightly entangled, whereas a typical state in Hilbert space is very strongly entangled. This implies that if one can only understand the entanglement present in these states then one could parametrise all of them efficiently. Such a parametrisation would then promise exceptional accuracy when combined with the variational principle.
The entanglement present in many naturally occurring states of quantum many-body systems also has a very special form: it is concentrated at the border between any two regions. Thus these states obey an area law: the entropy (of entanglement) of a region of the system is proportional to the size of the region's boundary rather than to its volume. So, to exploit this using the variational principle, we'll need some variational class exhibiting this property. Actually, we really need a lot more.
We now understand that there are several key desiderata that a good variational class ought to meet (these insights have been hard won in recent years, after many mistakes, extensive numerical experimentation and tinkering, and theoretical advances): (i) the class ought to depend on at most a polynomial number (in the number of particles) of variational parameters, so that the class can at least be stored in a computer's memory; (ii) it ought to be easy (as in, involving only polynomially many arithmetic operations) to extract information about the quantum state restricted to any small region of the system, i.e., one should be able to calculate the reduced density operator for any constant number of particles; (iii) the class ought to possess a refinement parameter $D$ which counts the number of variational parameters per particle, such that as $D$ is increased the class captures more and more quantum states; and (iv) the class ought to be complete, meaning that as $D \to \infty$ it should be able to represent any quantum state. Any variational class satisfying these requirements can serve as the basis of, at least, a numerical implementation of the variational principle.
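To get a feel for desideratum (i), here is a toy comparison (my own illustration; the bond dimension $D = 16$ is an arbitrary choice) of the number of complex parameters needed to store a generic state of $n$ spin-$\frac{1}{2}$ particles versus an MPS description of the same system.

```python
# Toy parameter count: a generic n-qubit state needs exponentially many
# amplitudes, while an MPS with refinement (bond) dimension D needs only
# polynomially many parameters.

def generic_state_params(n):
    # A generic state of n spin-1/2 particles needs 2**n complex amplitudes.
    return 2 ** n

def mps_params(n, d=2, D=16):
    # An MPS stores one d x D x D tensor of parameters per site.
    return n * d * D * D

for n in (10, 30, 50):
    print(n, generic_state_params(n), mps_params(n))
```

Already at fifty spins the generic description needs about $10^{15}$ amplitudes, while the MPS description needs only tens of thousands of parameters.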
For one-dimensional quantum spin systems we now know at least two variational classes satisfying these requirements, namely, matrix product states (MPS) and the multiscale entanglement renormalisation ansatz (MERA). I'd like to focus on MPS for this post, but much of what I'll say extends to MERA as well. The promise alluded to earlier, namely, that understanding the entanglement of naturally occurring states should show us how to parametrise them efficiently, has been realised by MPS, which have the correct entanglement properties. Finally, we now understand that the density matrix renormalisation group (DMRG) procedure of White (and its relatives) can be understood as a variation over the MPS class, thus finally explaining its unprecedented accuracy.
2. The holographic property of matrix product states

Here I'd like to argue that there is a fundamental physical explanation for the success of MPS. This has to do with the following interpretation: if we insist that every MPS is the result of some legal quantum dynamics (i.e., unitary processes) then we quickly arrive at a so-called \emph{sequential preparation} picture. Here an arbitrary MPS

$|\psi\rangle = \sum_{i_1, i_2, \ldots, i_n = 0}^{d-1} \langle \omega_f | A_n^{i_n} \cdots A_2^{i_2} A_1^{i_1} | \omega_0 \rangle \, |i_1 i_2 \cdots i_n\rangle$

can be understood as being the result of a sequence $U_1, U_2, \ldots, U_n$ of arbitrary unitary interactions of the initially disentangled spins of the lattice, in turn, with some fixed ancillary quantum spin of dimension $D$ (this is also the refinement parameter mentioned in the previous section); here the $A_j^i$ are $D \times D$ matrices, and $|\omega_0\rangle$ and $|\omega_f\rangle$ are the initial and final states of the ancilla. At the end of this process the auxiliary system is disentangled and discarded (so the quantum lattice ends up in a pure state). Conversely, any state prepared in such a way is an MPS. All of the quantum correlations in $|\psi\rangle$ are strictly the result of correlations created by $U_j$ between the $j$th spin and the common auxiliary system; the spins of the lattice never directly interact.
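To make the sequential picture concrete, here is a small numerical sketch (my own illustration; the chain length, ancilla dimension, and random unitaries are arbitrary choices). It couples each spin in turn to a $D$-dimensional ancilla via a random unitary and then verifies the key structural fact: all correlations pass through the ancilla, so the Schmidt rank across every left/right cut of the chain is at most $D$.

```python
import numpy as np

# Sequential preparation sketch: couple each of n spins, in turn, to a
# D-dimensional ancilla via a random unitary, then check that the Schmidt
# rank across every cut of the chain never exceeds D.
rng = np.random.default_rng(0)
d, D, n = 2, 3, 4

def random_unitary(m):
    # Unitary from the QR decomposition of a complex Gaussian matrix.
    q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))
    return q

# Spins start in |0...0>; the ancilla (last axis) starts in |0>.
state = np.zeros([d] * n + [D], dtype=complex)
state[(0,) * n + (0,)] = 1.0

for j in range(n):
    U = random_unitary(d * D)            # acts on spin j and the ancilla
    state = np.moveaxis(state, j, -2)    # bring spin j next to the ancilla
    shape = state.shape
    state = (state.reshape(-1, d * D) @ U.T).reshape(shape)
    state = np.moveaxis(state, -2, j)

# Correlations between spins 0..k-1 and the rest are mediated entirely by
# the D-dimensional ancilla, so each cut has Schmidt rank at most D.
for k in range(1, n):
    s = np.linalg.svd(state.reshape(d ** k, -1), compute_uv=False)
    print(k, np.sum(s > 1e-10))          # never exceeds D = 3
```

This bounded entanglement across every cut is exactly the area-law structure discussed above, and it is what makes such sequentially prepared states MPS.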
If we want to understand how the auxiliary system evolves throughout this process we can trace out the quantum spin system: we find that the state $\rho_j$ of the auxiliary system after the $j$th interaction is given in terms of $\rho_{j-1}$ by

$\rho_j = \mathcal{E}_j(\rho_{j-1}) = \sum_{i=0}^{d-1} A_j^i \rho_{j-1} (A_j^i)^\dagger,$

where $\mathcal{E}_j$ is a completely positive map. We require that $\mathcal{E}_j$ be trace preserving, i.e., that $\sum_i (A_j^i)^\dagger A_j^i = \mathbb{I}$, and that $\rho_0 = |\omega_0\rangle\langle\omega_0|$. Thus the auxiliary system is undergoing non-equilibrium dissipative dynamics described by $\mathcal{E}_j$ throughout the preparation process. We now think of the index $j$ as a (discrete) “time” label.
It is at this point that I can explain the holographic property possessed by MPS: suppose you want to calculate some physical expectation value of the quantum lattice system, say the correlation function $\langle\psi| O_j O_k |\psi\rangle$, where $O_j$ (respectively, $O_k$) is some operator acting nontrivially on site $j$ (respectively, $k$). You can do this in terms of the auxiliary system alone by first evolving the auxiliary system to time $j-1$, so it's in the state $\rho_{j-1}$, making some (admittedly complicated) insertion of a specific operator at $j$, evolving the auxiliary system again according to $\mathcal{E}$ up to time $k-1$, making another insertion at $k$, and then taking the trace. A similar recipe holds for any other physical observable. (This can be looked up in any paper on MPS.)
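Here is a numerical sketch of this recipe (my own illustration; it uses the equivalent transfer-matrix form on the doubled $D^2$-dimensional space, a small translation-invariant MPS with periodic boundaries and random tensors, and $\sigma_z$ insertions). It checks that the boundary computation reproduces a brute-force evaluation over all $2^n$ amplitudes.

```python
import numpy as np

# "Evolve, insert, evolve, insert, trace", written with transfer matrices
# on the doubled D^2-dimensional space, for a small periodic MPS.
rng = np.random.default_rng(2)
d, D, n = 2, 3, 6
A = rng.normal(size=(d, D, D)) + 1j * rng.normal(size=(d, D, D))
z = np.array([1.0, -1.0])    # eigenvalues of sigma_z on a single spin
j, k = 1, 4                  # sites where sigma_z is inserted (0-indexed)

# Brute force: build all 2^n amplitudes tr(A^{i_1} ... A^{i_n}) directly.
psi = np.zeros([d] * n, dtype=complex)
for idx in np.ndindex(*[d] * n):
    M = np.eye(D, dtype=complex)
    for i in idx:
        M = M @ A[i]
    psi[idx] = np.trace(M)
norm2 = np.sum(np.abs(psi) ** 2)
direct = sum(abs(psi[idx]) ** 2 * z[idx[j]] * z[idx[k]]
             for idx in np.ndindex(*[d] * n)) / norm2

# Boundary picture: one transfer step T per site (playing the role of the
# channel E), with an operator insertion TZ at sites j and k, then a trace.
T = sum(np.kron(A[i], A[i].conj()) for i in range(d))
TZ = sum(z[i] * np.kron(A[i], A[i].conj()) for i in range(d))
mp = np.linalg.matrix_power
holo = np.trace(mp(T, j) @ TZ @ mp(T, k - j - 1) @ TZ @ mp(T, n - 1 - k))
holo = (holo / np.trace(mp(T, n))).real
print(np.isclose(direct, holo))  # True: the two computations agree
```

The brute-force evaluation touches $2^n$ amplitudes, while the boundary computation only ever manipulates $D^2 \times D^2$ matrices, one per lattice site.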
Thus, we can understand all of the physical properties of an MPS purely in terms of the nonequilibrium dynamics of a geometrically zero-dimensional “boundary” auxiliary system. Another way of saying this is that the “bulk” wavefunction described by an MPS always has a local parametrisation in terms of a boundary system. The boundary theory provides the most economical description of what the state is doing outside of some region.
I would argue that this holographic property is the key physical reason for the success of MPS as a variational class: naturally occurring states are holographic.
It turns out that projected-entangled pair states (PEPS), which are a two-dimensional generalisation of MPS, also admit a holographic interpretation. The argument is more involved, and requires some new results on their sequential preparation that I’ll post at some later date. The result is the same, however: one can understand a two-dimensional PEPS as being parametrised by the nonequilibrium dynamics of a geometrically one-dimensional boundary theory.
3. Holographic quantum states and quantum field theory
In promoting the holographic property to the defining characteristic of a good variational class one is sorely tempted to apply it to quantum field theory. A first step in this direction has already been taken in the recent paper on continuous matrix product states (cMPS). In a paper just submitted to the arXiv, Jens Eisert, Frank Verstraete, and I explain how to construct a general variational class of quantum field states based on the holographic property. This class naturally extends the cMPS class to 2+1-dimensional theories. In the quantum field case many things are nicer: the boundary field undergoes genuine nonequilibrium dissipative dynamics and there is a compelling connection to the theory of continuous quantum measurement. I'll write more about all of this in future posts.