I always wanted to be a string theorist.
However, my career took a different turn and I ended up a quantum information theorist. Nevertheless, my fascination with particle physics and quantum gravity has never lessened. I suppose this has had a direct impact on what I chose to work on, namely, complex quantum systems and entanglement. Luckily this was to prove a good choice: I got to be part of an exciting revolution in quantum many-body theory, where entanglement-inspired thinking led to the development of a dazzling array of new variational classes, namely, tensor network states.
Tensor network states have proved to be a very useful tool for reasoning about the dynamics of strongly interacting quantum systems: they provide a wonderfully parsimonious description of the degrees of freedom that are actually relevant for observable physics, without needing to keep direct track of the exponentially large set of complex numbers required to specify a general quantum state in the Hilbert space of N particles.
But my first love was always quantum field theory and string theory so I was delighted when, in late 2009, I realised that one could take a continuum limit of a matrix product state (a tensor network state that I’d spent a lot of time working with) to produce a nontrivial quantum field state. With great excitement I worked out the details of the construction and started writing it up for publication. However, as is often the case with a good idea, it occurs to many people at the same time — such ideas are somehow “in the air”. This was no exception: to my astonishment I discovered a paper submitted some two weeks prior entitled “Continuous matrix product states for quantum fields” which read, almost word for word, like my draft. This was doubly embarrassing because the paper had already appeared on the arXiv and I had somehow missed it and, further, it had been written by my good friends and collaborators Frank Verstraete and Ignacio Cirac!
Thus began a very fruitful time during which we worked out many of the details of this new approach to quantum field theory: over the past few years my collaborators and I have been enthusiastically extending cMPS to a variety of settings and, along the way, developing continuum limits of the PEPS and MERA tensor network states.
My hope, all this time, has been that these new continuous tensor network states would immediately lead to progress on the major unsolved problems in quantum field theory, just as they had in quantum many body theory. With the arrogance of the ignorant I was convinced we’d solve the problem of “infinities” and usher in a new era of quantum information-inspired quantum field theory.
It turned out not to be so easy.
During the past five years I’ve thrown myself into the deep end of quantum field theory: I realised early on I’d have to go back to basics and actually really learn the subject properly. Along the way I realised that my original naive enthusiasm was somewhat misguided and misdirected. But I also discovered some truly fascinating things that I’m very excited about. I’d like to share some of these with you today; although my initial conception of quantum field theory was indeed wrongheaded, I discovered that quantum information theorists are really very well-suited to think about all sorts of interesting problems in quantum field theory.
Why quantum field theory is hard
To the quantum information theorist quantum field theory is hard for three reasons:
- The language is extremely unfamiliar, e.g., lagrangians instead of hamiltonians;
- Many textbooks give the impression that there are troubling conceptual problems (e.g., the problem of “infinities”); and
- Quantum information theorists are not used to (and, indeed, are often suspicious of) continuous degrees of freedom.
Let me begin by stating unequivocally: there are no “troubling infinities” in QFT as practised by physicists today. There are only issues when one insists on interpreting QFT as a fundamental theory. One of Wilson’s great insights was to realise that the way around this problem was, as is the case with many revolutionary ideas, to simply change the question. The modern formulation of QFT is now as an effective theory which is only expected to give good predictions up to some threshold (the cutoff) beyond which the theory must be revised. As soon as you accept this interpretation all of the infinities (and the consequent conceptual and interpretational problems) immediately vanish.
It took me years to fully appreciate this point. I can save you the hassle and simply point you to a paper of Wallace which does, in my opinion, a rather good job of thoroughly explaining this in layman’s terms (see, in particular, pp. 5-7).
The persistent misinformation that there are problems with QFT somehow never dies and still appears in various forms in many modern textbooks. It continues to baffle me that Wilson’s conception of QFT as an effective theory does not appear on page 1 of every QFT textbook!
What is this renormalisation thingy? The idea is beautifully simple: following Wilson we simply change the definition of a QFT to that of an effective theory with cutoff. Of course, we need to make sure that this cutoff doesn’t directly appear in any prediction for operationally meaningful quantities or, if it does, that its contribution vanishes as the cutoff is removed. The art of fiddling with the cutoff effective theory so that the cutoff isn’t experimentally visible is called renormalisation. Wilson persuasively argued that we can always do this near a fixed point of the RG (more on this later).
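To see what fiddling with a cutoff theory actually looks like, here is a toy sketch of renormalisation in the simplest setting I know (my own illustration, not from the text above; the model is the standard textbook example of a particle in two dimensions with an attractive contact potential, regularised on a lattice). The bound-state energy is the operationally meaningful quantity; the bare coupling is not. Holding the former fixed while shrinking the lattice spacing forces the latter to run:

```python
import numpy as np

def bare_coupling(a, L=10.0, E_B=1.0):
    """Bare coupling g(a) for an attractive on-site potential -(g/a^2)|0><0|
    on a periodic 2d lattice of physical size L x L with spacing a, tuned so
    that the bound-state energy is exactly -E_B.  The tuning condition
    1 = (g / (N a)^2) * sum_k 1/(eps(k) + E_B) follows from the lattice
    Schrodinger equation in momentum space."""
    N = int(round(L / a))
    k = 2 * np.pi * np.arange(N) / L
    kx, ky = np.meshgrid(k, k)
    # Lattice kinetic energy; reduces to k^2 for small k*a
    eps = (2.0 / a**2) * (2.0 - np.cos(kx * a) - np.cos(ky * a))
    return (N * a) ** 2 / np.sum(1.0 / (eps + E_B))

# Hold the physical prediction E_B fixed while removing the cutoff:
# the price is a bare coupling that runs (logarithmically) to zero.
couplings = [bare_coupling(a) for a in (0.2, 0.1, 0.05)]
```

The bare coupling here is exactly the sort of “horrible” cutoff-dependent quantity the text describes: it depends on a in an ugly way, but no prediction for the measurable bound-state energy does.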
Note that it is totally OK if the cutoff influences quantities which are not operationally defined. That is, if you can’t design an experiment to measure a certain quantity then it is absolutely fine if it depends on the cutoff in some horrible way. What sort of quantities are operationally well-motivated? Traditionally these are n-point dynamical correlation functions; indeed, we usually focus on the two- and four-point functions. (Unless we are talking about gauge theory, but that will be the subject of another post.)
What about the continuous degrees of freedom? Well, again, here things aren’t so bad. Quantum information theorists of a quantum-optical persuasion are very used to modes of light being described by canonical commutation relations. The key to “getting” perturbative QFT for a QI theorist is to realise that, because of the cutoffs involved in its definition, a QFT is actually defined by an infinite lattice of harmonic oscillators (usually in momentum space). To make this a little easier: a perfectly respectable cutoff to use is a spatial lattice; just define your effective QFT on a spatial lattice. The next step is to ensure that the cutoff, i.e., the lattice, doesn’t influence our predictions for operationally defined quantities. Once we’ve done that we’ve managed to remove the cutoff or, in other words, take the continuum limit.
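To make the “lattice of harmonic oscillators” picture concrete, here is a minimal sketch (my own illustration, in Python): a free scalar field of mass m with a spatial-lattice cutoff is literally just a quadratic hamiltonian for N coupled oscillators, and diagonalising its coupling matrix recovers the familiar lattice dispersion relation:

```python
import numpy as np

# Cutoff "QFT": a free scalar field of mass m on a periodic lattice of
# N sites with spacing a, i.e. N coupled harmonic oscillators with
# H = (1/2) sum_i [ p_i^2 + m^2 x_i^2 + (x_{i+1} - x_i)^2 / a^2 ].
N, a, m = 64, 1.0, 0.5

# Coupling matrix K, so that H = (p.p)/2 + (x.K.x)/2
K = np.zeros((N, N))
for i in range(N):
    K[i, i] = m**2 + 2.0 / a**2
    K[i, (i + 1) % N] -= 1.0 / a**2
    K[i, (i - 1) % N] -= 1.0 / a**2

# Normal-mode frequencies from diagonalising K ...
omega_numeric = np.sort(np.sqrt(np.linalg.eigvalsh(K)))

# ... match the lattice dispersion omega(k)^2 = m^2 + (4/a^2) sin^2(k a / 2)
k = 2 * np.pi * np.arange(N) / (N * a)
omega_analytic = np.sort(np.sqrt(m**2 + (4 / a**2) * np.sin(k * a / 2) ** 2))
```

The vacuum of this hamiltonian is a gaussian state of N modes, which is exactly the object a quantum-optically minded QI theorist is comfortable with.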
What about lagrangians? Starting with classical lagrangians seems totally backward to a QI theorist: we know the world is, and always has been, quantum. So defining quantum theories in terms of classical lagrangians seems odd. The crucial point here is to realise that the classical lagrangian is not really the definition of a QFT, rather it is an object we aspire to. A helpful thing to keep in mind here is that whenever you see a lagrangian written with continuous degrees of freedom you should think of it as the compact description of the effective classical system we’d like to get from our cutoff QFT once we’ve: (a) removed the dependence on the cutoff; and (b) decoherence has destroyed the off-diagonal terms. A useful heuristic to accomplish this task is to discretise the “defining” classical system, quantise it, solve it (perturbatively), and perturbatively renormalise it to remove the influece of the cutoff on observable quantities (i.e., keep revising the original discretised quantum hamiltonian until the influence of the cutoff disappears), and then apply decoherence to obtain an effective classical model described by the desired classical lagrangian.
What about Feynman diagrams and so on? This is often a barrier for the quantum information theorist. I can help you here as well: perturbative QFT is always built around a gaussian state, namely the vacuum state of the noninteracting theory. Quantum information theorists are no strangers to gaussian states; it’s just that we usually use the language of covariance matrices to talk about them. In this sense perturbative QFT starts from a gaussian state given by the vacuum of a quadratic hamiltonian, which is then perturbed slightly by a non-quadratic term. The simplest thing you can then do is use time-dependent perturbation theory to study the dynamics of the resulting non-quadratic theory. It is right there that you encounter an infinite series of terms. You then use the fact that the expectation value of any product of operators in a gaussian state (the unperturbed state) is given in terms of covariance matrix elements. Quantum field theorists traditionally represent these covariance matrix expressions compactly using Feynman diagrams. It takes a little while to translate between the two formulations, but after a while it becomes automatic.
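In covariance-matrix language the content of a diagrammatic expansion is just Wick’s theorem: every 2n-point function of a gaussian state is a sum over pairings of covariance matrix elements, one term per “diagram”. A minimal sketch (my own illustration; the function name is made up):

```python
import numpy as np

def wick(indices, G):
    """2n-point function <x_{i1} x_{i2} ... x_{i2n}> of a gaussian state
    with (real, symmetric) covariance matrix G, computed as a sum over
    all pairings -- each pairing is one "Feynman diagram"."""
    if len(indices) == 0:
        return 1.0
    first, rest = indices[0], list(indices[1:])
    total = 0.0
    # Pair the first operator with each remaining one in turn and recurse
    for pos in range(len(rest)):
        remaining = rest[:pos] + rest[pos + 1:]
        total += G[first, rest[pos]] * wick(remaining, G)
    return total

# Any symmetric positive-definite matrix can play the covariance matrix
G = np.array([[1.0, 0.3],
              [0.3, 0.8]])

four_point = wick((0, 0, 1, 1), G)  # three pairings: G00*G11 + 2*G01^2
```

The number of pairings grows as (2n-1)!!, which is precisely why the compact diagrammatic bookkeeping becomes indispensable at higher orders.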
So, to summarise, to make sense of perturbative QFT you need to:
1. begin by defining it as a quantum theory with a cutoff, e.g., a hamiltonian for a lattice of quantum oscillators;
2. work out the unperturbed gaussian state defining the vacuum state;
3. use standard time-dependent perturbation theory to express all quantities in terms of the covariance matrix;
4. fiddle with the original lattice hamiltonian so that predictions for operationally meaningful quantities do not depend on the cutoff (in this case, the lattice spacing); and finally
5. apply decoherence to check that the resulting classical theory matches the original lagrangian you wanted.
Notice I never once said “Lorentz invariant”. This was deliberate: quantum field theory is useful in all kinds of contexts, both relativistic and nonrelativistic. But, still, what does Lorentz invariance mean in the context of QFT? You should think of Lorentz invariance simply as a symmetry that we demand of our system. This translates to an extra constraint we need to impose on our predictions for operationally well-defined quantities. Unfortunately it is often very hard to impose this constraint in combination with step 4: the two processes can be at odds with each other, i.e., while trying to eliminate dependence on the cutoff you mess up the Lorentz symmetry constraint, and vice versa. This is tricky, and the simplest way out is often to define your cutoff theory in such a way that it manifestly satisfies the constraint of Lorentz invariance.
It is really difficult to overstate the incredible successes of perturbative QFT. Literally thousands of papers have been written on the topic and all the low-hanging fruit has been plucked long ago, as has the medium-hanging and, indeed, the high-hanging fruit. It is hard to imagine what open problems are left.
A tensor-network inspired programme to understand nonperturbative QFT
It is definitely true that perturbative QFT is a mature and well-understood theory. But what about nonperturbative QFT, i.e., QFT built around a nongaussian state? To quote Nima Arkani-Hamed: “there be monsters”.
Now it would be totally unfair to say we know nothing about nonperturbative QFT: firstly, lattice gauge theory has been amazingly successful in extracting nonperturbative predictions; just look at figure 22 on p. 45 of this paper. This plot never fails to fill me with awe. Secondly, string-theory inspired methods have led to incredible analytical successes, particularly in exploiting the AdS/CFT correspondence.
But it is true to say that we don’t yet understand nonperturbative QFT in general. Today I will propose and argue that tensor-network methods actually offer a promising, and somewhat independent, approach to understanding nonperturbative QFT.
To actually do nonperturbative QFT we need to carry out steps 1, 4, and 5 described above for perturbative QFT without recourse to steps 2 and/or 3. One of the worst problems is that there is a dearth of nongaussian quantum states leading to nontrivial QFTs, i.e., states from which you can explicitly remove the dependence on the cutoff in operationally well-defined predictions.
What I will now argue is that tensor network states provide us with a vast playground of nontrivial quantum states from which the cutoff dependence can be explicitly removed. The key is to follow Wilson and realise that removing the cutoff equates to approaching a second-order quantum phase transition. Simply put: we need to find families of states exhibiting a diverging correlation length, i.e., approaching a second-order quantum phase transition, and to rescale the correlation functions as we approach the transition in just the right way so that the resulting limit is well defined (this is just Wilson’s RG expressed in “condensed matter language”). With the benefit of many years of experience we can now carry out this procedure in generality for many nontrivial families of tensor network states, including MPS, PEPS, tree tensor network states, and MERA.
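Here is a small sketch of the key diagnostic (my own illustration, in Python): for a translation-invariant MPS the correlation length is read off from the gap between the two largest eigenvalues of the transfer operator, and “approaching a second-order transition” means driving that gap to zero:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
d, D = 2, 8  # physical dimension and bond dimension

# A random translation-invariant MPS tensor A[s], s = 0, ..., d-1
A = rng.standard_normal((d, D, D)) + 1j * rng.standard_normal((d, D, D))

# Transfer operator E = sum_s A[s] (x) conj(A[s]); connected two-point
# functions decay as (lambda_2 / lambda_1)^r with separation r
E = sum(np.kron(A[s], A[s].conj()) for s in range(d))
lam = np.sort(np.abs(np.linalg.eigvals(E)))[::-1]

# Correlation length in units of the lattice spacing; it diverges
# (and a continuum limit opens up) only as lambda_2 / lambda_1 -> 1
xi = -1.0 / np.log(lam[1] / lam[0])
```

A generic MPS, like the random one above, has a finite correlation length; the game is to engineer families of tensors whose transfer-operator gap closes in a controlled way.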
It turns out that it is relatively easy to design families of tensor network states with a diverging correlation length whose n-point correlation functions are calculable. Indeed, the cMPS, cPEPS, and cMERA constructions I mentioned earlier provide specific examples. However, they are special cases of a more general procedure that I’ve been working on for several years, one which gives us a vast playing field of cutoff-independent nongaussian quantum states. Many challenges remain, most glaringly the imposition of the constraint of Lorentz invariance; however, I feel the initial results are already promising enough to warrant further investigation.
This procedure is detailed in the first of what I envisage will eventually be four papers on this topic. You can read more about these papers in the next post.