In this post I’d like to begin to explore what is meant by the *continuum limit* of a quantum lattice system. It is meant to serve as the first in a series of intuitive overviews of the ideas involved in the open science project “continuous-limits-of-quantum-lattice-systems” hosted on GitHub.

The continuum limit is a powerful tool in the condensed-matter theorist’s toolkit: by identifying the appropriate *effective* field theory modelling the low-energy, large-scale physics of a complex quantum system, one can bring the fully developed apparatus of (perturbative) field theory and the renormalisation group to bear on a problem, often delivering results unavailable by any other means.

Now I’m pretty sure I’m not alone in feeling confused by much of the available physics literature on this topic. Over the past decade I’ve tried to understand the process whereby a field theory is produced to describe a given quantum lattice system. However, up until recently, this always seemed like a kind of mysterious black magic to me. I know it has to do with symmetries, etc., but this didn’t really help me! I had so many questions. E.g., how exactly does the state of the effective field theory relate to that of the original lattice system? And, for that matter, how do you know which quantities are “fieldlike” and which don’t admit a field-theoretic representation? That is, what has most puzzled me is the *quantitative* side of things: ideally what I’d like is some kind of map which associates, one to one, lattice quantities with field quantities in an operationally transparent way.

Thus I was very excited when I discovered that there is indeed such a map and, further, that it is naturally associated with the *quantum de Finetti theorem*. Here I’d like to explain the idea behind this construction using the quantum-information-theoretic language of exchangeable states.

Firstly, we need to agree on what the input to, and output from, this procedure ought to be. Let’s initially set our goal to be to take a regular quantum lattice system with lattice spacing $a$ and to produce the *effective classical field data*, i.e., a list of classical fields and a state for the fields, which models the low-energy large-scale physics of the lattice. (We’ll get on to how to obtain a quantum field description in a later post.) In this context, as a quantum information theorist, it is natural to think about states rather than hamiltonians. So, at the very least, what we need for the construction is a state for a quantum lattice. To keep things simple let’s assume we have an (infinite) one-dimensional lattice of qubits with lattice spacing $a$.

Actually, to take a *limit*, we need more than just one state $\rho_a$: we need a *sequence* of states $(\rho_a)$, one for each possible value of the lattice spacing $a$, as $a \to 0$. It is perfectly fine if the state is constant, e.g., the same product state for all values of $a$. However, things are much more interesting if the states are allowed to depend on $a$ in some nontrivial way, e.g., if they have a correlation length (in lattice units) which grows proportionally to $1/a$, because in this case there ought to be correlations in the limit (pretty much irrespective of what your interpretation of “continuum limit” is).

Where does this family come from? Usually from the ground state of a hamiltonian $H_a$ which *explicitly* depends on $a$. E.g., $H_a$ could simply be a field-theory hamiltonian discretised on a lattice with spacing $a$. But, and this is important, our sequence could also come from a tensor network ansatz not directly tied to a specific hamiltonian for a field theory.

Taking the continuum limit is now as simple as just letting $a$ go to zero. But the problem is that this sentence can be interpreted in multiple ways. This dilemma is intricately tied up with what we mean by *scale* for our system, and what is meant by *changing scale*. Most naively, we could simply *declare* that the continuum limit is found when we set $a$ to zero. This is a perfectly fine answer, but not entirely in the spirit of how these things are *used*, i.e., as an *effective theory* appropriate only for the description of large-scale features. The issue is that simply setting $a = 0$ doesn’t really change anything: without further restrictions we are still implicitly allowing *any* measurement of arbitrary (but still finite) numbers of qubits at *arbitrary* locations. That is, any observable with finite support is allowed. Worse, we can even allow stringlike operators, etc.

The key point is that when we change scale, i.e., we “zoom out”, there is an implicit understanding that small-scale stuff is harder to resolve. For example, when you zoom out of an image in Photoshop and display it on the *same* monitor as before, with the same pixel density, you *lose information* about the original pixel values.
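As a toy illustration (my own sketch, not part of the original construction), here is the image analogy in numpy: block-averaging a one-dimensional “image” by a factor of 2 sends many distinct fine-grained pixel patterns to the same coarse-grained picture, so the original pixel values cannot be recovered. The function name `zoom_out` and the sample data are, of course, hypothetical.

```python
import numpy as np

# Two very different fine-grained 1-D "images".
fine_a = np.array([1.0, 0.0, 1.0, 0.0, 0.9, 0.1, 0.5, 0.5])
fine_b = np.array([0.0, 1.0, 0.5, 0.5, 0.5, 0.5, 0.2, 0.8])

def zoom_out(image, factor=2):
    """Coarse-grain by averaging non-overlapping blocks of pixels."""
    return image.reshape(-1, factor).mean(axis=1)

coarse_a = zoom_out(fine_a)
coarse_b = zoom_out(fine_b)
# After zooming out, the two images are indistinguishable: information
# about the fine-grained pixel values has been irretrievably lost.
```
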

Exactly how should we model this process of information loss? There is more than a little arbitrariness here. However, remarkably, it turns out that the final continuum limit doesn’t really depend on the details of this zooming out procedure. I won’t be able to prove this here (this observation is properly dealt with via the *renormalisation group*). Instead, I’ll take it for granted and move on to a specific model of zooming out for quantum spin systems which leads, in a natural way, to a bosonic classical field.

Suppose we want to zoom out by a factor of $\lambda$ from a quantum lattice with spacing $a$. To identify how information should be lost as we zoom out we appeal to a simple model for the measurement of observables of the lattice. Here we exploit a von Neumann-like prescription whereby we interrogate the lattice via interaction with some “pointer” particle (which is then subsequently subjected to a complete measurement). The picture in mind here is that of a pointer particle, e.g. a neutron, being scattered off the lattice into some detector. This experiment makes sense regardless of the lattice spacing; however, if the spacing is reduced we would need to work harder, by preparing a better-collimated beam, to resolve the individual lattice sites. Another way of modelling this is to say that, if we don’t pay any more money, when we measure the finer lattice we must put up with the possibility that the interrogating particle beam spreads and only addresses lattice sites *with some probability* centred around the intended site.

This is equivalent to the description whereby the lattice is first subjected to *random permutations* swapping neighbouring lattice sites, and then a *perfect* measurement via the scattering particle takes place. Now we come to the connection to exchangeability.

Suppose our beam addresses, *at best*, a physical region $\Lambda$ of length $\ell$ containing $\ell/a$ lattice sites. Suppose the initial lattice spacing is $a = \ell$, so that initially $\Lambda$ contains only one lattice site. If we zoom out by a factor of $\lambda$ then our region now contains $\lambda$ sites. However, our apparatus can still only resolve things on the lengthscale $\ell$. Since our detector is only able to address the region $\Lambda$, all the measurement outcomes can be inferred from the reduced density operator $\rho_\Lambda = \operatorname{tr}_{\Lambda^c}(\rho_a)$. However, because the detector is assumed to have the same sensitivity before and after the zooming, we model this by subjecting the state to *random permutations* of the spins in $\Lambda$, i.e., we first apply a CP map $\mathcal{E}$ to $\rho_\Lambda$ to produce

$$\mathcal{E}(\rho_\Lambda) = \frac{1}{\lambda!} \sum_{\pi \in S_\lambda} U_\pi \rho_\Lambda U_\pi^\dagger. \qquad (1)$$

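The symmetrizing channel is easy to play with numerically. Below is a minimal sketch (the helper names `permute_qubits` and `symmetrize`, and the choice of a three-qubit example, are mine): averaging a small qubit-chain state over all site permutations yields a permutation-invariant state while preserving the trace.

```python
import itertools
from functools import reduce
import numpy as np

def permute_qubits(rho, perm):
    """Relabel the sites of an n-qubit density matrix according to perm."""
    n = len(perm)
    t = rho.reshape((2,) * (2 * n))
    # permute the row indices and the column indices identically
    axes = list(perm) + [n + p for p in perm]
    return np.transpose(t, axes).reshape(2 ** n, 2 ** n)

def symmetrize(rho, n):
    """The 'zooming out' CP map: average over all site permutations."""
    perms = list(itertools.permutations(range(n)))
    return sum(permute_qubits(rho, p) for p in perms) / len(perms)

# A three-site state with clearly distinguishable sites: |0>, |1>, |+>.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi = reduce(np.kron, [ket0, ket1, plus])
rho = np.outer(psi, psi.conj())

rho_sym = symmetrize(rho, 3)  # permutation-invariant, trace-one
```
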
Now we employ the quantum de Finetti theorem to deduce that (after ignoring a couple of sites at the boundaries) our reduced density operator is very nearly a convex combination of *product* states:

$$\mathcal{E}(\rho_\Lambda) \approx \int \sigma^{\otimes \lambda}\, d\mu(\sigma). \qquad (2)$$

This is a remarkable result: as we zoom out from a fixed state our detector will be unable to distinguish the state of a fixed physical region from a convex combination of *product states*. Further, we can *locally* specify the state the detector sees with a probability measure $\mu$ over the *state space* of a qubit. That is, the state the detector sees is determined by a (possibly position-dependent) probability distribution over three-dimensional vectors (specifying the location on the Bloch sphere). It is this fundamental observation that underlies the continuum-limit construction.
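A small numerical sketch of this structure (the two-point measure and the helper `reduce_to_first_site` are my own illustrative choices): a convex combination of product states is completely specified by a probability distribution over Bloch vectors, and the local state seen by the detector is just the corresponding average single-qubit state.

```python
from functools import reduce
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def qubit_state(r):
    """Single-qubit density matrix with Bloch vector r."""
    return 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

# A two-point measure over Bloch vectors: r1 with prob 0.3, r2 with 0.7.
r1, r2 = np.array([0.0, 0.0, 1.0]), np.array([0.8, 0.0, 0.0])
n = 3
rho = (0.3 * reduce(np.kron, [qubit_state(r1)] * n)
       + 0.7 * reduce(np.kron, [qubit_state(r2)] * n))

def reduce_to_first_site(rho, n):
    """Partial trace over all sites except the first."""
    t = rho.reshape(2, 2 ** (n - 1), 2, 2 ** (n - 1))
    return np.trace(t, axis1=1, axis2=3)

# The local state is fixed by the same measure over Bloch vectors.
mean_state = 0.3 * qubit_state(r1) + 0.7 * qubit_state(r2)
```
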

So far we’ve discussed only how to zoom out from a fixed state. To obtain the full continuum-limit construction we need only apply the zooming-out prescription to the sequence $(\rho_a)$. That is, the continuum limit is given by the following recipe: fix a lengthscale $\ell$ on which our observations take place. Then for any region $\Lambda$ the continuum limit is given by the probability distribution $\mu$ arising in (2), as applied to the increasing set of sites in $\Lambda$ as $a$ is decreased.

It turns out to be a bit awkward to pursue the continuum-limit construction this way, in the Schrödinger picture via the de Finetti theorem (although one can do this). Instead, we’ll move over to the Heisenberg picture and express our zooming-out prescription in terms of the restricted observables we allow ourselves to measure on the sequence $(\rho_a)$ as $a$ is reduced.

Exploiting the physical justification above, we imagine we are performing some kind of “neutron scattering experiment” to measure, for example, the $z$-component of a spin at physical position $x$. As we said, the impinging beam of particles, even if well collimated, will inevitably spread as it travels toward the spin chain. Thus the *observable* measured by this scattering experiment is, instead of $\sigma^z_j$, rather $\sum_j f(x_j)\sigma^z_j$, where $f$ is the *beam shape* (for example, a gaussian centred at $x$: $f(y) \propto e^{-(y-x)^2/2\Delta^2}$) and $x_j = ja$ is the physical position of site $j$. Thus the beam addresses approximately $\Delta/a$ spins. Even though the lattice spacing of the quantum lattice system isn’t precisely known, we declare that $\sum_j f(x_j)\sigma^z_j$ corresponds to the same experimental setup, i.e., the “measurement of the spin at location $x$”, independent of the lattice spacing.

Motivated by this example we declare that the family of observables given by

$$S(\mathbf{f}) = a \sum_{j} \mathbf{f}(x_j) \cdot \boldsymbol{\sigma}_j, \qquad x_j = ja,$$

where $\mathbf{f}$ is a *fixed* three-dimensional vector-valued function, is the correct way to compare states of the lattice with different lattice spacings. That is, we say that $\rho_a$ and $\rho_{a'}$ are *close* if all the expectation values of these observables (and their products, etc.) are close.

Note the crucial fact that the operators $S(\mathbf{f})$ commute with each other in the limit $a \to 0$: $[S(\mathbf{f}), S(\mathbf{g})] = O(a)$. Thus, in the limit $a \to 0$, the observables may be jointly measured; the set of limiting observables forms a *commutative algebra* and therefore models an effective classical system.
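This $O(a)$ scaling can be checked numerically on small chains. The sketch below (helper names `site_op` and `smeared`, and the gaussian beam shape, are my own choices) builds $S(\mathbf{f})$ for single Pauli components and watches the commutator norm shrink as the spacing $a = 1/n$ decreases: since $[\sigma^x_j, \sigma^y_k] = 0$ for $j \neq k$, the commutator collects only diagonal terms weighted by $a^2$, one factor of $a$ of which survives the Riemann sum.

```python
from functools import reduce
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, j, n):
    """op acting on site j of an n-qubit chain (identity elsewhere)."""
    ops = [I2] * n
    ops[j] = op
    return reduce(np.kron, ops)

def smeared(op, f, n, length=1.0):
    """S(f) = a * sum_j f(x_j) op_j, with lattice spacing a = length/n."""
    a = length / n
    return a * sum(f(a * j) * site_op(op, j, n) for j in range(n))

f = lambda x: np.exp(-(x - 0.5) ** 2 / 0.02)  # a gaussian "beam shape"
norms = []
for n in [2, 4, 8]:
    Sx, Sy = smeared(sx, f, n), smeared(sy, f, n)
    comm = Sx @ Sy - Sy @ Sx
    norms.append(np.linalg.norm(comm, 2))
# norms shrinks roughly in proportion to a = 1/n: the smeared
# observables asymptotically commute.
```
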

We can now finally formally introduce the classical continuum limit: this is *defined* by the expectation values

$$\lim_{a \to 0} \operatorname{tr}\!\left(\rho_a\, S(\mathbf{f}_1) S(\mathbf{f}_2) \cdots S(\mathbf{f}_n)\right),$$

for $n = 1, 2, \ldots$, as the $\mathbf{f}_k$ run over all possible “beam shapes”, provided this limit exists. We say that $(\rho_a)$ *admits a classical continuum limit* if these expectation values are finite for all rapidly decaying vector-valued functions $\mathbf{f}_k$. Note the relation to a classical field: in the limit, $S(\mathbf{f})$ plays the role of the smeared field $\int \mathbf{f}(x) \cdot \boldsymbol{\phi}(x)\, dx$.

To conclude this post we consider a simple example: suppose our system is a chain of qubits, and $\rho_a = \cdots \otimes \sigma \otimes \sigma \otimes \sigma \otimes \cdots$, where $\sigma$ is some single-qubit state. That is, $\rho_a$ is a mean-field ansatz independent of the lattice spacing. In this case it is relatively easy to show that $\lim_{a \to 0} \operatorname{tr}(\rho_a S(\mathbf{f})) = \int \mathbf{f}(x) \cdot \mathbf{r}\, dx$, where $\mathbf{r}$ is the Bloch vector of $\sigma$, and that the expectation values of products factorise in the same way: the continuum limit is the deterministic constant field $\boldsymbol{\phi}(x) = \mathbf{r}$.
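For the product state the first moment reduces to a scalar Riemann sum, $\operatorname{tr}(\rho_a S(\mathbf{f})) = a \sum_j \mathbf{f}(x_j) \cdot \mathbf{r}$, so convergence is easy to check without building any large matrices. A minimal sketch (the particular Bloch vector, beam shape, and chain length are my own illustrative choices):

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
# A hypothetical single-qubit state sigma with Bloch vector r = (0, 0, 0.6).
sigma = 0.5 * (np.eye(2) + 0.6 * sz)
rz = np.real(np.trace(sigma @ sz))            # z-component of r, i.e. 0.6

f = lambda x: np.exp(-(x - 0.5) ** 2 / 0.02)  # gaussian beam shape
length = 1.0

def expectation(n):
    """tr(rho_a S(f)) for the product state, with spacing a = length/n."""
    a = length / n
    xs = a * np.arange(n)
    return a * np.sum(f(xs)) * rz

# As a -> 0 the Riemann sum converges to rz times the integral of f
# (the gaussian integral, since f is negligible at the boundaries).
target = rz * np.sqrt(np.pi * 0.02)
```
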