The variational principle is the basis of a tremendous number of highly successful calculational tools in physics, which is surprising because it is simple enough to cover in an introductory quantum mechanics course. Remember how it works? Pick some wavefunction that, on physical grounds, you believe captures the essential physics of the ground state and, crucially, make sure the wavefunction depends on one or more free parameters. A classic example is the gaussian wavefunction

$$\psi_\alpha(x) = e^{-\alpha x^2},$$

with $\alpha$ the free variational parameter.

Now plug this wavefunction into the expression for the energy, $E(\alpha) = \langle\psi_\alpha|H|\psi_\alpha\rangle/\langle\psi_\alpha|\psi_\alpha\rangle$, and vary $\alpha$ until you reach a minimum value. There is a theorem which guarantees that the optimal value of the energy will always overestimate the correct value, and that the closer $E(\alpha)$ gets to the true ground-state energy the better an approximation the wavefunction is to the actual ground-state wavefunction. That's it.
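As a toy illustration of the procedure (my own example, not from the original text), here is a numerical minimisation of $E(\alpha)$ for the anharmonic oscillator $H = p^2/2 + x^4$ with a gaussian trial state. The exact ground-state energy of this Hamiltonian is $E_0 \approx 0.668$, and the variational optimum indeed sits slightly above it, as the theorem promises.

```python
import numpy as np

# Variational sketch for the anharmonic oscillator H = p^2/2 + x^4
# (units hbar = m = 1) with the gaussian trial state psi_a(x) = exp(-a x^2).

x = np.linspace(-6, 6, 4001)
dx = x[1] - x[0]

def energy(a):
    """E(a) = <psi_a|H|psi_a> / <psi_a|psi_a>, evaluated on a grid."""
    psi = np.exp(-a * x**2)
    dpsi = np.gradient(psi, dx)
    num = np.sum(0.5 * dpsi**2 + x**4 * psi**2) * dx   # kinetic + potential
    den = np.sum(psi**2) * dx                          # normalisation
    return num / den

# Scan the free parameter and pick the minimum.
alphas = np.linspace(0.2, 2.0, 1000)
energies = [energy(a) for a in alphas]
i = int(np.argmin(energies))
print(f"optimal alpha ~ {alphas[i]:.3f}, E ~ {energies[i]:.4f}")
# The variational theorem guarantees this E lies above E_0 ~ 0.668.
```

The closed-form result for this trial family is $E(\alpha) = \alpha/2 + 3/(16\alpha^2)$, minimised at $\alpha = (3/4)^{1/3} \approx 0.909$ with $E \approx 0.681$, which the grid scan reproduces.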

Of course life is never simple, especially for interacting systems of many particles. The trouble is that it is very hard to come up with a variational wavefunction for which you can actually calculate $\langle H\rangle$ *and* which bears some resemblance to the actual ground state. You need to be extremely clever to design reasonable variational wavefunctions, and it often requires deep new insights into the physics of a system to come up with a good one. Some classic examples include the BCS wavefunction in superconductivity, the Gutzwiller wavefunction, and the Laughlin wavefunction for the fractional quantum Hall effect (ok, the last one is a bit of a cheat: it doesn't really have any free parameters).

The dominance of the variational principle does not extend, however, to quantum field theory, where it has not met with the same success as in other branches of many-body physics. The reason for this was very clearly explained by Feynman in a lecture given at a conference in Wangerooge, Germany, in 1987. In this article I want to revisit Feynman's lecture in the light of recent developments in quantum information theory and condensed matter physics.

In his lecture Feynman described some of his unsuccessful attempts at getting the principle to work. More importantly, he shared the insight he gained during this process into why he believed the principle would *not* work. While the lecture strikes a rather negative tone, he does actually offer some suggestions as to what he believed would repair the principle. In reading the article I was struck by the prescience of his suggestions. But I'll get to this later.

**2. The UV problem**

The first problem Feynman identifies is what might be called the *ultraviolet* problem. This terminology is born in free (linear) quantum field theory, where there is a clear separation of scales coming from the noninteracting modes, labelled by their wave 4-vector. This picture is pervasive in QFT, where one routinely speaks of high-frequency — or *ultraviolet* — degrees of freedom, and low-frequency — or *infrared* — degrees of freedom. However, in an interacting quantum field theory the mode picture breaks down: UV modes are coupled to IR modes and what emerge are collective degrees of freedom spread across momentum space. While for Yang-Mills theories one expects this coupling to be weak, because of asymptotic freedom, it still plays a key role in the physics of these theories, leading, amongst other things, to confinement and the (conjectured) mass gap. To lowest order the ground-state energy can be thought of as an (infinite) sum over the modes, each contributing its zero-point energy to the total.
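To make the last statement concrete, take a free scalar field of mass $m$ as an illustrative example (my choice here, not specific to Yang-Mills): the lowest-order ground-state energy is the sum of zero-point contributions

$$E_0 = \sum_{\vec{k}} \tfrac{1}{2}\,\omega_{\vec{k}}, \qquad \omega_{\vec{k}} = \sqrt{|\vec{k}|^2 + m^2},$$

a sum which diverges unless the UV modes are cut off.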

The ultraviolet problem doesn't really emerge for *non-relativistic* field theories, where the kinetic energy operator is positive: for many non-relativistic quantum field theories the vacuum is a very empty, boring place, and interesting physics only takes place at a finite density of particles, which provides a natural cutoff. In contrast, in *relativistic* quantum field theory the kinetic energy operator is unbounded from below: in the absence of a UV cutoff there are simply "more" (infinitely more) UV degrees of freedom than IR degrees of freedom. Now the vacuum state is a seething, roiling mass of virtual particles, as it has *infinite* density. For a fermionic theory we can appeal to intuitions also familiar in condensed matter physics to understand this: the ground state of a free theory is a Dirac sea where all the negative-energy modes are filled. (Obviously the Dirac sea concept didn't emerge from condensed matter theory, but it is certainly familiar there.) Interactions couple these modes, leading to a situation where virtual particle-antiparticle pairs can form and annihilate.

If we now apply the variational principle to an interacting relativistic quantum field theory we are in trouble. The procedure will try to optimise the wavefunctional to satisfy the UV modes (of which there are an infinite number, each contributing to the ground-state energy) at all costs, and a simple way for it to do this is to essentially ignore the IR physics. But we are interested in the IR physics (this is what's observable), so we fail. Another way to express this: if we imagine all these modes as little independent systems lined up in a half-infinite chain, then only a handful of them at the end of the chain are what we typically care about (these are the IR degrees of freedom); all the rest are usually unimportant (they are the UV degrees of freedom). But when we use a wavefunctional designed from a free quantum theory, such as a gaussian, we are essentially treating all of these modes in the same way: we are averaging out the response of the wavefunctional to changes in individual modes. But this average washes out the IR contributions and then we are lost.
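The averaging argument can be made quantitative with a toy model (my own illustration, not from the lecture): take $N$ independent harmonic modes with frequencies $\omega_n = n$, and give each the gaussian trial state $e^{-a x^2}$ with a *single shared* parameter $a$, mimicking a wavefunctional that treats all modes in the same averaged way. Minimising the total energy forces $a$ toward the UV scale, so the IR mode ends up badly described.

```python
import numpy as np

N = 100
w = np.arange(1, N + 1).astype(float)   # mode frequencies (UV = large w)

# Energy of a mode of frequency w with gaussian parameter a is
# E = a/2 + w^2/(8a); minimising the total over a shared a gives
# the closed-form optimum below, dominated by the UV frequencies.
a_shared = np.sqrt(np.sum(w**2) / (4 * N))
a_exact = w / 2                          # true per-mode optimum

# Overlap of normalised gaussians exp(-a x^2) and exp(-b x^2):
# F = sqrt(2 sqrt(ab) / (a + b)).
fidelity = np.sqrt(2 * np.sqrt(a_shared * a_exact) / (a_shared + a_exact))

print(f"shared a = {a_shared:.1f}")
print(f"fidelity with true ground state: IR mode (w=1): {fidelity[0]:.3f}, "
      f"UV mode (w={N}): {fidelity[-1]:.3f}")
```

The shared parameter lands near the root-mean-square frequency, so the UV modes are captured with fidelity close to 1 while the IR mode's fidelity drops to roughly 0.5: the average has washed out the IR physics.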

**3. The calculational problem**

The other problem that Feynman describes is much more basic: at the time of his lecture there simply weren't that many variational quantum field wavefunctionals around. Indeed, essentially the only well-understood class were the *gaussian* states. Most of the ways people thought of to extend the gaussian wavefunctional class seemed to require perturbation theory in order to calculate $\langle H\rangle$, and Feynman suggested that this would inevitably ruin the application of the variational principle, which relies on high accuracy to navigate its way through energy-space. Obviously here Feynman was expressing a "lack of imagination", and this criticism is not intrinsic to relativistic theories: it applies equally to non-relativistic quantum field theories and strongly-interacting quantum spin systems.

**4. Holographic quantum states and relativistic quantum field theory**

Since 1988, starting with the precursor of the MPS class, the AKLT state, we have witnessed an explosion in the development of variational wavefunctions for strongly-interacting quantum systems. Many of these developments have been a result of a better understanding of how *quantum entanglement* behaves in these systems. These wavefunctions, going well beyond gaussian trial states, allow the accurate and efficient calculation of observable quantities. (I refer you to my previous post here on holographic quantum states for an (idiosyncratic) introduction to some of these variational classes.) Thus, in some sense, Feynman's objection can already be regarded as having been addressed, in the case where one of these classes is applied in conjunction with a lattice regulator.

In his closing remarks Feynman speculated on how best to overcome his second reservation: he predicted that it should be possible to describe a global field state using a reduced set of local parameters, and he foresaw the role of the *density matrix* in such a description. It turns out that *matrix product states*, and hence *continuous matrix product states* (cMPS), exhibit precisely these anticipated features, as they possess a key *holographic property*: they are parametrised by the (non-equilibrium) dynamics of an auxiliary system — or *boundary field theory* — of one lower geometric dimension. This description also naturally identifies the physics of the auxiliary system as a *compact description* of the properties of the quantum field state outside a given region. (Once again I refer you to my previous post for further details and explanation.)

So it seems that quantum field states, such as cMPS, with a holographic property overcome Feynman's "calculational objection". However, the UV problem remains, and cMPS are far from immune to it. But now something very interesting happens: although cMPS do indeed suffer from the UV problem, they do so in a particularly simple and transparent way that strongly suggests a solution. To explain this we need to describe the variational parameters of a cMPS: these are a collection of $D\times D$ matrices, the first of which is written $Q$ and the remaining of which are $R_\alpha$, where $\alpha$ runs over the particle flavours.
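For reference, a translation-invariant cMPS takes the form introduced by Verstraete and Cirac (quoted here from memory of the standard definition):

$$|\chi\rangle = \operatorname{Tr}_{\mathrm{aux}}\left[\mathcal{P}\exp\left(\int dx\, \Big(Q\otimes\hat{1} + \sum_\alpha R_\alpha\otimes\hat\psi^\dagger_\alpha(x)\Big)\right)\right]|\Omega\rangle,$$

where $|\Omega\rangle$ is the Fock vacuum, $\mathcal{P}\exp$ denotes the path-ordered exponential, and the trace is over the $D$-dimensional auxiliary (boundary) system.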

Under a change of scale $x \mapsto x/\lambda$, a cMPS is equivalent to another cMPS with $Q \mapsto \lambda Q$ and $R_\alpha \mapsto \sqrt{\lambda} R_\alpha$. The particle number per unit length will be multiplied by a factor of $\lambda$ and the kinetic energy per unit length by a factor of $\lambda^2$ (one power of $\lambda$ for the derivative and one for the density). However, in contrast to the non-relativistic case, the relativistic kinetic energy is not a positive definite operator and can acquire a negative expectation value. If $|\chi\rangle$ is a cMPS for which this is the case, the kinetic energy can be lowered further by making a scale change with $\lambda > 1$. If we allow $\lambda \to \infty$ then these quantities will diverge, as expected. This behaviour is the manifestation of Feynman's UV objection for the cMPS ansatz.
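The scaling of the density can be checked numerically. The following is a small sketch (my own illustration; it assumes the standard transfer-matrix expression for the particle density of a translation-invariant cMPS in the left-canonical gauge):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # auxiliary (bond) dimension

# A random cMPS in the left-canonical gauge, Q + Q† + R†R = 0,
# obtained by setting Q = -iH - R†R/2 with H Hermitian.
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
H = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
H = H + H.conj().T
Q = -1j * H - 0.5 * (R.conj().T @ R)

def particle_density(Q, R):
    """<psi† psi> = tr(R rho R†), where rho is the right fixed point
    of the transfer map T(rho) = Q rho + rho Q† + R rho R†."""
    D = Q.shape[0]
    I = np.eye(D)
    # Row-major vectorisation: vec(A X B) = (A ⊗ B^T) vec(X).
    T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
    w, V = np.linalg.eig(T)
    rho = V[:, np.argmin(np.abs(w))].reshape(D, D)  # eigenvalue ~ 0
    rho = rho / np.trace(rho)                       # normalise tr(rho) = 1
    return np.trace(R @ rho @ R.conj().T).real

lam = 2.0
d = particle_density(Q, R)
d_scaled = particle_density(lam * Q, np.sqrt(lam) * R)  # rescaled cMPS
print(d_scaled / d)  # the density picks up exactly the factor lambda
```

The fixed-point equation is homogeneous in $\lambda$, so $\rho$ is unchanged under the rescaling and the density picks up its factor of $\lambda$ from $R^\dagger R$ alone; the analogous computation for the kinetic-energy density would exhibit the corresponding higher power of $\lambda$.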

To understand this better it is worth describing the properties of a cMPS in momentum space. What one typically observes is that a cMPS has a momentum distribution $n(k)$ which occupies a *band* of frequencies between $0$ and some cutoff point $\Lambda$, which is directly related to the norms of $Q$ and $R$. Beyond this point the momentum occupation decays polynomially in $k$. Under scale transformations the momentum distribution is stretched by a factor of $\lambda$, so the cutoff is changed to $\lambda\Lambda$. This behaviour clearly profiles $\Lambda$ as the parameter that runs along the renormalisation group flow. For scale-invariant theories the low-frequency behaviour will be scale invariant.
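In symbols, restating the above: if $n(k)$ is the momentum distribution of a cMPS with parameters $(Q, R)$, then the rescaled cMPS $(\lambda Q, \sqrt{\lambda} R)$ has

$$n_\lambda(k) = n(k/\lambda), \qquad \Lambda \mapsto \lambda\Lambda,$$

so that the total density $\int \frac{dk}{2\pi}\, n_\lambda(k) = \lambda \int \frac{dk}{2\pi}\, n(k)$ picks up exactly the factor of $\lambda$ noted earlier.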

However, a solution to the UV problem is now straightforward: we can prevent $\Lambda$ from running to infinity by imposing a constraint on the matrices $Q$ and $R$. Constraining the *norm* of $Q$ and $R$ prevents $\Lambda$ from running and regularises the resulting theory by introducing a scale to the system. It is worth emphasising that the cutoff imposed here is directly related to bounding the size of the matrices $Q$ and $R$: cMPS provide a cutoff if the norms of $Q$ and $R$ are bounded, and any (fermionic) theory with a cutoff can be approximated by a cMPS with $Q$ and $R$ matrices whose norms are finite.

When a cMPS is used to solve a theory via the variational procedure one obtains a state whose momentum occupation fills some band of energies. Intuitively, what is happening is as follows. The variational procedure begins by filling the available levels at the high-frequency end. If the bond dimension $D$ is large enough it is able to fill all of them very accurately down to momentum zero. If $D$ is too small it will make compromises near momentum zero. Now, if there is a gap $\Delta$, then a bond dimension $D$ growing with $\Lambda/\Delta$ should suffice to provide a good description. If there is no gap, i.e. the system is critical, there is always a region near zero momentum that is less well described. If, as happens in a field theory without cutoff, $\Lambda$ can run to infinity, then there is also a region where the state is poorly described. The picture I have in mind is that the variational procedure starts filling the modes near $\Lambda$ at some rate, but $\Lambda$ keeps moving away and the procedure is chasing after it, always ignoring the relevant IR degrees of freedom in the process.

The momentum distribution profile of an optimal cMPS will usually depend on the scale set by the cutoff. Once we've found the optimal variational solution at this scale we can transform to a different regularisation scale via the transformation $Q \mapsto \lambda Q$ and $R \mapsto \sqrt{\lambda} R$: we thus obtain a new cMPS whose momentum distribution is identical to that of the original, except that it is *stretched* by a factor of $\lambda$. This will still be the optimal solution for a scale-invariant system, while a better solution will exist at the new regularisation scale if the system breaks scale invariance.

These arguments are explored in considerably more detail in a joint paper with Jutho Haegeman, Ignacio Cirac, Henri Verschelde, and Frank Verstraete that will appear on the arXiv presently. There we use cMPS to variationally calculate the properties of some $(1+1)$-dimensional quantum field theories. We first look at free Dirac fermions as a sanity check, and we find excellent agreement with the analytic solution with modest computational effort. Then we look at a nontrivial theory, the Gross-Neveu model, which exhibits many of the characteristic features of QCD, e.g., asymptotic freedom and chiral symmetry breaking. Here we again find good results agreeing with known analytic approaches. Finally, we study the Casimir effect for fermions, and we find very interesting qualitative agreement with analytics for the Casimir energy. (We believe that all these numerical results can be improved substantially.) Much remains to be done, but we believe that our paper demonstrates that the variational procedure can be successfully applied to study relativistic quantum field theory. The key is to exploit our much improved understanding of variational state classes for strongly correlated quantum systems.

Wissenschaftskolleg zu Berlin, 2010.