Deriving

October 27, 2010

At the moment I am doing some work which involves many lengthy derivations. Typically these derivations involve multiple integrals of matrix functions. (If you’ve ever had to do perturbation theory in quantum field theory you will have done similar calculations.) The problem is that if I make even a tiny mistake, e.g., a minus-sign error, then this snowballs into a big BIG mess. And the final answer needs to be correct because it’s to be used in a numerical procedure; the method goes crazy otherwise! (These calculations are for an application of the variational principle over cMPS to some example quantum field theories; you can look at a summarised example here where I’ve written out all the equations required to reproduce the numerics of this paper.)

I’ve given up on writing these calculations on paper because I keep having to tear it up and start again. (This is very frustrating if you’ve just written 10 pages of derivations and then you find a sign error on page 2; how do you propagate the corrections through the following 8 pages without ending up with an unreadable mess?)

So I’ve adopted an “algorithm” which appears to be fairly robust: I write all the derivations in LaTeX. Then I copy and paste the current equation and make one, and only one, substitution/evaluation/rearrangement, and then repeat. This procedure quickly produces very long LaTeX files. But I’ve noticed that when I make a mistake it is fairly easy to propagate the correction through the remainder of the notes. Also, I seem to make fewer mistakes. It may seem like a lot of work, but I think it’s better than tearing up paper!
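To illustrate the style, here is a toy example in the spirit of those files (the integral is my own, not one of the actual cMPS calculations); each line differs from its predecessor by exactly one operation, so any error can be traced to the unique step that introduced it:

\begin{align*}
I(a) &= \int_{-\infty}^{\infty} e^{-a x^{2}} \, dx && \text{starting point} \\
     &= \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} e^{-u^{2}} \, du && \text{one substitution: } u = \sqrt{a}\, x \\
     &= \sqrt{\frac{\pi}{a}} && \text{one evaluation: the Gaussian integral}
\end{align*}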

What I dream of now is some kind of “automated derivation error correction” for my procedure. I don’t mean anything sophisticated, just some protocol where a derivational error can be detected and corrected many steps later without having to correct all the intervening lines. Perhaps some kind of repetition encoding…? Of course one would need the discipline to actually apply it… 😉
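For what it’s worth, here is a rough sketch of the least sophisticated version of such a check I can imagine (everything in it, including the deliberate error, is my own invention; it assumes the derivation steps have been transcribed into sympy): compare consecutive lines, symbolically where possible and numerically otherwise, so the first failing comparison localises the step that introduced the error.

```python
# A rough sketch of mechanical step-checking, assuming each line of a
# derivation has been transcribed as a sympy expression. Every entry in
# `steps` is supposed to equal the one before it; the first comparison
# that fails localises the step that introduced the error.
import random
import sympy as sp

x, a = sp.symbols('x a', positive=True)

steps = [
    sp.exp(-a * x**2) * sp.exp(-a * x**2),
    sp.exp(-2 * a * x**2),
    sp.exp(-2 * a) * sp.exp(x**2),   # a deliberate error, for illustration
]

for i in range(1, len(steps)):
    diff = sp.simplify(steps[i] - steps[i - 1])
    if diff != 0:
        # simplify() could not prove equality; fall back to a numerical
        # spot check at random parameter values.
        vals = {x: random.uniform(0.1, 2.0), a: random.uniform(0.1, 2.0)}
        if abs(sp.N(diff.subs(vals))) > 1e-9:
            print(f"step {i} does not match step {i - 1}")
```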


The variational principle, relativistic quantum field theory, and holographic quantum states

June 13, 2010

The variational principle is the basis of a tremendous number of highly successful calculational tools in physics, which is surprising because it is simple enough to cover in an introductory quantum mechanics course. Remember how it works? Pick some wavefunction that, on physical grounds, you believe captures the essential physics of the ground state and, crucially, make sure the wavefunction depends on one or more free parameters. A classic example is the wavefunction

\[ \psi_a(x) \propto e^{-ax^2}. \qquad (1) \]

Now plug this wavefunction into the expression for the energy, $E(a) = \langle\psi_a|H|\psi_a\rangle/\langle\psi_a|\psi_a\rangle$, and vary $a$ until you reach a minimum value. There is a theorem which guarantees that the optimal value of the energy will always overestimate the correct value, and that the closer $E(a)$ gets to the true value, the better an approximation the wavefunction $\psi_a(x)$ is to the actual ground-state wavefunction. That’s it.
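To see the procedure in action (a standard textbook check, not part of the original post), take the harmonic oscillator $H = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 x^2$ with the Gaussian trial state (1). A short calculation gives

\[
E(a) = \frac{\hbar^{2} a}{2m} + \frac{m\omega^{2}}{8a},
\qquad
\frac{dE}{da} = 0 \;\Rightarrow\; a = \frac{m\omega}{2\hbar},
\qquad
E\!\left(\frac{m\omega}{2\hbar}\right) = \frac{\hbar\omega}{2},
\]

which in this case is exact, because the Gaussian family happens to contain the true ground state; for generic Hamiltonians the minimum is only an upper bound.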

Of course life is never simple, especially for interacting systems of many particles. The trouble is that it is very hard to come up with a variational wavefunction for which you can actually calculate $E(a)$ and which bears some resemblance to the actual ground state. You need to be extremely clever to design reasonable variational wavefunctions, and it often requires deep new insights into the physics of a system to come up with a good one. Some classic examples include the BCS wavefunction in superconductivity, the Gutzwiller wavefunction, and the Laughlin wavefunction for the fractional quantum Hall effect (OK, the last one is a bit of a cheat: it doesn’t really have any free parameters).

The dominance of the variational principle does not extend, however, to quantum field theory, where it has not met with the same success as in other branches of many-body physics. The reason for this was very clearly explained by Feynman in a lecture given at a conference in Wangerooge, Germany, in 1987. In this article I want to revisit Feynman’s lecture in the light of recent developments in quantum information theory and condensed matter physics.

In his lecture Feynman described some of his unsuccessful attempts at getting the principle to work. More importantly, he shared the insight he gained during this process into why he believed the principle would not work. While the lecture strikes a rather negative tone, he does actually offer some suggestions as to what he thought might repair the principle. In reading the article I was struck by the prescience of his suggestions. But I’ll get to this later.


Holographic quantum states

May 7, 2010

Suppose you want to make a prediction about some physical system. To do this you’ll need to somehow solve the defining equations, which for quantum systems means solving the Schrödinger equation. As far as I’m aware there are really only three general approaches one can take: (i) apply perturbation theory to relate your system to something you’ve already solved; (ii) use the variational principle; or (iii) use Monte Carlo sampling. (Of course, there is always another option, reserved for geniuses: exactly solve the system.)
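As an aside, options (ii) and (iii) are often combined in practice. The following sketch is entirely my own toy example (with $\hbar = m = \omega = 1$): it estimates the variational energy $E(a)$ of the Gaussian trial state $\psi_a(x) \propto e^{-ax^2}$ for a harmonic oscillator by Metropolis sampling, rather than by doing the integrals analytically.

```python
# A toy example (mine, not from the post) combining the variational principle
# with Monte Carlo sampling: estimate E(a) for psi_a(x) ~ exp(-a x^2) in a
# harmonic oscillator (hbar = m = omega = 1) by Metropolis-sampling |psi_a|^2
# and averaging the local energy E_loc = (H psi_a)/psi_a.
import numpy as np

def local_energy(x, a):
    # H = -0.5 d^2/dx^2 + 0.5 x^2 acting on exp(-a x^2), divided by exp(-a x^2)
    return a + (0.5 - 2 * a**2) * x**2

def estimate_energy(a, n_samples=200_000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for _ in range(n_samples):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance for the density |psi_a(x)|^2 = exp(-2 a x^2)
        if rng.random() < np.exp(-2 * a * (x_new**2 - x**2)):
            x = x_new
        energies.append(local_energy(x, a))
    return np.mean(energies)

for a in (0.3, 0.5, 0.7):   # the exact minimum is at a = 0.5, where E = 0.5
    print(a, estimate_energy(a))
```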

Even though perturbation theory and the variational principle are elementary techniques — they feature in any decent introductory quantum mechanics text — they have seen increasingly sophisticated applications in recent years. Indeed, there can be no more powerful and well-developed application of perturbation theory in any area of science than the machinery developed in quantum field theory.

The variational principle is itself the basis of a tremendous number of highly successful calculational tools in physics. Examples range from the Rayleigh-Ritz method in atomic physics, to density functional theory in quantum chemistry, to the density matrix renormalisation group in condensed matter physics. So pervasive is this technique that, apart from perturbation theory, it has essentially become the only way to extract highly accurate predictions about quantum systems.

In this post I’d like to focus on recent applications of the variational principle in physics, in particular in condensed matter physics, and isolate what features these successful applications are exploiting. I will argue that the recently developed numerical renormalisation group algorithms that vary over the matrix product state (MPS) class (and its relatives) constitute the most sophisticated implementations of the variational principle to date. Finally, I’ll argue that there is an abstract physical explanation for this success, by isolating a fundamental property possessed by MPS and their relatives, namely, that they are parametrised holographically.
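To make the MPS class concrete, here is a minimal sketch (my own illustration; the tensors are random rather than variationally optimised): the amplitude of each spin configuration is a product of $D \times D$ matrices, one per site, so a state in a $2^N$-dimensional Hilbert space is parametrised by only $N d D^2$ numbers.

```python
# A minimal sketch of the MPS parametrisation (illustration only): the
# amplitude <s_1 ... s_N | psi> is a trace of a product of matrices, one
# matrix per site, selected by the local state s_i at that site.
import numpy as np

N, d, D = 8, 2, 4              # sites, local dimension, bond dimension
rng = np.random.default_rng(0)
A = [rng.normal(size=(d, D, D)) for _ in range(N)]  # one tensor per site

def amplitude(config):
    """<config|psi> as a product of matrices selected by the local states."""
    M = np.eye(D)
    for site, s in enumerate(config):
        M = M @ A[site][s]
    return np.trace(M)         # trace closes the chain (periodic boundaries)

print(amplitude((0, 1, 0, 0, 1, 1, 0, 1)))
```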
