Information propagation through disordered quantum systems

In this post, which highlights work with Christian Burrell and Jens Eisert, I’d like to talk about disordered quantum systems again. In particular, I’d like to discuss the problem of what is meant by “localisation” for a strongly interacting quantum system. I’d also like to investigate the role of disorder in how information propagates through interacting quantum systems.

The purpose of this post is to introduce (or, rather, to emphasise) a way to quantitatively discuss the phenomenon of localisation in strongly interacting quantum systems. The approach I’d like to propose here exploits ideas from the theory of quantum noise to approximate the dynamics of disordered quantum systems. To discuss this proposal I want to begin by reviewing the theory of information propagation through interacting quantum systems. Then I’ll show how ideas from the theory of quantum noise can be successfully exploited to calculate properties of the discrete Anderson model. I’ll then conclude by showing how this approach gives a definite prediction for what the dynamics of a disordered strongly interacting quantum system should look like. I have now spent a little time studying the literature on disordered systems and I haven’t been able to find anything that applies quantum noise techniques to disordered systems in quite the way I describe here, but I may not have used the correct search phrases; any omission is therefore due to my ignorance. (I have found one article which appears to exploit related ideas, but that approach appears to throw out more terms than the one I describe here, and it seems difficult to generalise to strongly interacting lattice systems.)

1. Information propagation through quantum systems

The quantum systems I want to talk about in this post will be lattice systems. To give us something concrete let’s suppose that we have {n} quantum spin-{1/2} particles arranged in a line. Thus the Hilbert space for our system is {\mathcal{H} = \mathbb{C}^{2^n}}. We further restrict our attention to systems which couple only neighbouring spins:

\displaystyle  	H = \sum_{j=1}^{n-1} h_j \ \ \ \ \ (1)

where {h_j \equiv \mathbb{I}_{1\cdots j-1}\otimes h_j \otimes \mathbb{I}_{j+2\cdots n}} is the interaction between spins {j} and {j+1} (with the usual abuse of notation: we use the same symbol for the two-spin operator and its padded version acting on the whole chain). I’ll write {[n] = \{1,2,\dots, n\}} for the set of lattice positions.
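Concretely, here’s a minimal numpy sketch (my own illustration, not part of any result below) that assembles a Hamiltonian of the form (1) for a small chain; I use the Heisenberg coupling as a stand-in for the otherwise arbitrary {h_j}:

import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, site, n):
    # pad op (acting on spins site, site+1, ...) with identities elsewhere
    k = int(round(np.log2(op.shape[0])))  # number of spins op acts on
    return np.kron(np.kron(np.eye(2 ** site), op), np.eye(2 ** (n - site - k)))

def chain_hamiltonian(n):
    # H = sum_j h_j, with the Heisenberg coupling standing in for h_j
    h2 = np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)
    return sum(embed(h2, j, n) for j in range(n - 1))

H = chain_hamiltonian(6)  # 6 spins: a 64 x 64 matrix
print(H.shape)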

In recent years we’ve learnt quite a few general results about how systems like (1) evolve in time. One general result, which I’d like to highlight here, is a bound on how fast information can propagate through a low-dimensional lattice system. This result, the Lieb-Robinson bound, was first proved in 1972 and has now become a central tool in the study of strongly interacting lattice systems. What it says, roughly speaking, is that if you wiggle one end of an arbitrary lattice system then there is some velocity (call it the “speed of light”, or “speed of sound”, if you like) at which disturbances propagate throughout the system.

To state the result precisely we need to introduce some notation: let {A} and {B} be Hermitian operators (observables) on our lattice system. These represent observations that we might make on the system at different times. We allow {A} and {B} to be totally arbitrary as long as: (i) the supports of {A} and {B}, which are the sets of spins on which {A} and {B} act nontrivially, are bounded; (ii) the norms of {A} and {B} are bounded, e.g., {\|A\|_\infty \le 1} and {\|B\|_\infty\le 1}. An example of an allowed {A} operator might be {A = \sigma_1^z\sigma_2^y}, where {\sigma_j^\alpha} are the Pauli sigma matrices on spin {j}. The support of this example is {\{1,2\}}. An operator which wouldn’t be allowed would be, e.g., {A=\sigma_1^z\sigma_2^z\cdots \sigma_n^z} because it has support {[n]}, which is not bounded as {n} is allowed to increase. With this notation in hand we can now state the Lieb-Robinson bound:

\displaystyle  	\|[A(t), B]\| \le c_0 e^{-vd(A,B)+k|t|}, \ \ \ \ \ (2)

where {c_0, v, k} are {O(1)} constants, {A(t) = e^{-itH}Ae^{itH}}, and {d(A,B)} is the distance between the two sets {\mbox{supp}(A)} and {\mbox{supp}(B)} measured by the number of steps you need to take along the lattice from {A} until you hit {B}.
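To get a feel for (2), here’s a small numerical experiment (again just an illustrative sketch with made-up parameters): compute {\|[A(t), B]\|} for a short Heisenberg chain with {A = \sigma_1^z} and {B = \sigma_n^z}, and watch the commutator stay tiny until the “light cone” arrives:

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, site, n):
    k = int(round(np.log2(op.shape[0])))
    return np.kron(np.kron(np.eye(2 ** site), op), np.eye(2 ** (n - site - k)))

n = 6
h2 = np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)
H = sum(embed(h2, j, n) for j in range(n - 1))
A = embed(sz, 0, n)      # observable at the left end
B = embed(sz, n - 1, n)  # observable at the right end

for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    U = expm(-1j * t * H)
    At = U @ A @ U.conj().T  # A(t) = e^{-itH} A e^{itH}
    print(f"t = {t:.1f}  ||[A(t),B]|| = {np.linalg.norm(At @ B - B @ At, 2):.2e}")

For small {t} the norm is essentially zero; it then grows rapidly once the light cone reaches the separation {d(A,B)}.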

Don’t try too hard to interpret (2) physically: the bound is useful not for its direct physical interpretation, but rather because it supplies bounds on dynamical correlation functions. For example, the Lieb-Robinson bound easily supplies us with the following result:

\displaystyle  	\langle A(t) B\rangle \approx \langle A(t)\rangle \langle B\rangle + c_0 e^{-vd(A,B)+k|t|}, \ \ \ \ \ (3)

that is, for timescales short in comparison to the physical separation between the two observations, the dynamical correlator between {A} and {B} factorises. In other words, if you measure the observable {A} at time {0} at some location then this measurement will, in general, disturb the system, but it will take a time {t} roughly proportional to the separation {d(A,B)} before the disturbance affects the measurement statistics of a later measurement of {B}.

Even though this is physically “obvious” it takes some proving; there are now many strengthenings, improvements, and extensions of the original proof. A quick intuitive guide to one proof strategy can be found here.

I can’t resist mentioning a really nice way to sharpen the intuition concerning the Lieb-Robinson bound and information propagation: it turns out that the L-R bound can actually be used to show that, no matter how cleverly you initialise the system, and no matter how cleverly you encode things by engineering the operator {A} at one end of the chain, the classical information you can extract by measurement of {B} at the other end of the chain after a constant time is also exponentially suppressed in the separation between {A} and {B}. This wonderful result is due to Bravyi, Hastings, and Verstraete, and can be found here.

Results such as the Lieb-Robinson bound are like gold in the study of strongly interacting systems: they give you general nontrivial control over arbitrary quantities for arbitrary systems. That is, the result is not at all specific to, say, just the Heisenberg model, but is true for any strongly interacting lattice system, so long as it only has short-ranged interactions.

2. The effect of disorder on information propagation

Let’s suppose we live in the real world now. No matter how careful and clever we are, if we try to fabricate a system so that it has ideal interactions like (1) then we are doomed to fail; there will always be some dirt, stray magnetic fields, and other sources of disorder present. Thus, real Hamiltonians look more like

\displaystyle  	H(\mathbf{x},t) = \sum_{j=1}^{n-1} h_j + K(\mathbf{x},t) \ \ \ \ \ (4)

where {K} is the (possibly time-dependent) Hamiltonian which models all the noise and dirt and which depends on {\mathbf{x} = (x_1, x_2, \ldots, x_m)}, a collection of random variables (also possibly depending on time) encapsulating all the disorder. It is usual to make some assumptions about {K}: in this post I’ll assume that the disorder can be modelled (very specifically) as

\displaystyle  	K(\mathbf{x},t) = \sum_{j=1}^{n} x_j\sigma_j^z, \ \ \ \ \ (5)

i.e., as a time-independent random magnetic field. This is the setting of quenched disorder. I’ll further assume that the {x_j} are independently and identically distributed Gaussian random variables with probability density proportional to {e^{-x_j^2/\mu}} (i.e., with variance {\mu/2}; the parameter {\mu} sets the disorder strength).

One might think that the addition of a little disorder shouldn’t have much of an effect on the dynamics of a system like (1). Surprisingly this is not the case: in one dimension it is now understood that the addition of an arbitrarily small (constant) amount of noise should have a dramatic effect on the eigenvalues and eigenstates of the system: the system will exhibit Anderson localisation (see my post here for a brief description of this phenomenon). In two and higher dimensions this is not necessarily the case: there is a noise threshold below which the system still supports extended eigenstates.

Now what I’ve said is really only known to hold for the discrete Anderson model

\displaystyle  	H_{AM} = \sum_{j} |j+1\rangle\langle j| + \mbox{h.c.} + \sum_{j=1}^n x_j |j\rangle\langle j|, \ \ \ \ \ (6)

and its (very many!) relatives, which usually pertain to a single scalar particle hopping in a lattice subject to a disordered potential (the {x_j} are i.i.d. random variables). It is worth comparing the Hilbert space of a single scalar particle in a lattice with {n} sites with that of {n} quantum spins. The former is given by {\mathbb{C}^n} and the latter by {\mathbb{C}^{2^n}}, i.e., {n} quantum spins can explore exponentially many more configurations than a single scalar particle. Promisingly, there are extensions of the localisation phenomenon to strongly interacting systems: see, e.g., this paper.

There are several ways to quantify Anderson localisation. The most typical characterisations focus on the properties of the eigenstates. Another way, which is closer to what I’d like to highlight in this post, is to focus on the properties of the propagator {e^{itH_{AM}}}. In terms of the propagator, Anderson localisation is the result that, for almost all realisations of the random potential {\{x_j\}},

\displaystyle  	|\langle j|e^{itH_{AM}}|k\rangle| \le c_0 e^{-v d(j,k)}. \ \ \ \ \ (7)

This result can be reinterpreted as saying that time-dependent correlation functions approximately factorise for all time:

\displaystyle  	\langle A(t) B\rangle \approx \langle A(t)\rangle \langle B\rangle + c_0 e^{-vd(A,B)}, \ \ \ \ \ (8)

where now {A} and {B} are operators which act nontrivially on sites within a set {\mbox{supp}(A)} (respectively, {\mbox{supp}(B)}) and {d(A,B)} is the distance between these two sets. Compare this with the result supplied by the Lieb-Robinson bound (which also holds for scalar particles) (3).
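Here’s a quick numerical illustration of the dynamical localisation bound (7) (a sketch of mine, with an arbitrarily chosen disorder strength): diagonalise one realisation of {H_{AM}} and look at how the propagator decays away from the diagonal, even at large times:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 200
hop = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
H = hop + np.diag(rng.normal(0.0, 1.0, size=n))  # one disorder realisation

U = expm(1j * 50.0 * H)  # e^{itH_AM} at t = 50, the sign convention used above
j = n // 2
for d in [0, 5, 10, 20, 40]:
    print(f"d = {d:3d}  |<j|e^(itH)|j+d>| = {abs(U[j, j + d]):.2e}")

The amplitudes fall off exponentially in {d}, in line with (7), and (unlike the clean model) they stay that way at arbitrarily large {t}.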

3. The problem of localisation for strongly interacting systems

Anderson localisation is usually studied in the context of free particles, or weakly interacting particles. In both these settings it is usually evident what the actual quasiparticles of the system are, and it is easy to say what is meant by localisation of the eigenstates of these particles. I would imagine that many researchers in disordered systems would use the following workflow: (i) identify a set of scalar-particle type excitations whose clean properties can be well quantified; (ii) quantify how the presence of disorder affects the scalar particles; then (iii) apply Green’s-function-type arguments/quantum field theory methods to understand the localisation of the eigenstates of the excitations of the system. This workflow is intricate and usually involves some pretty powerful tools like the supersymmetric method.

But what should we do if the system is strongly interacting and we have absolutely no idea, not even numerically, of what the excitations of the system are? One approach might be to work in terms of the interacting particles themselves, rather than trying to understand the system’s quasiparticles, but this makes the whole workflow I described above totally horrible: the eigenstates of the system are massively entangled and it is a nightmare to say what is being localised in an eigenstate… The core problem is that while we can talk of the localisation of a single particle easily, i.e., the eigenstate is concentrated at some location and tails off rapidly, it is very difficult to quantify what is meant by “tailing off rapidly” when you have, say, some entangled state of {n} particles in a lattice of {2n} sites (which is what a strongly interacting quantum spin system is equivalent to). E.g., suppose the system was in the eigenstate

\displaystyle  	\frac{1}{\sqrt2}|00\cdots0\rangle + \frac{1}{\sqrt2}|11\cdots1\rangle, \ \ \ \ \ (9)

(here we are working in the occupation-number basis where {|m_1m_2\cdots m_n\rangle} means that there are {m_j} particles at lattice site {j}). Is this state localised? It can never be a product of single-particle wavefunctions, localised or otherwise, but it does involve superpositions of products of localised states of single particles… Such states (and much, much worse) can easily arise as eigenstates of a strongly interacting system.

In this situation I’d like to propose that a natural way to study the problem is to instead focus on the dynamical properties of the system and aim to prove something like: “the time-dependent correlation functions of local operators always approximately factorise for all time”. This could be easiest to quantify in terms of a Lieb-Robinson-type bound. Thus I’d like to propose the following.

Conjecture 1 Consider the system

\displaystyle  	H(\mathbf{x},t) = \sum_{j=1}^{n-1} h_j + \sum_{j=1}^{n} x_j\sigma_j^z, \ \ \ \ \ (10)

then there exists a threshold disorder strength {\mu} above which, for almost all realisations of {\{x_j\}},

\displaystyle  	\|[A(t), B]\| \le c_0 e^{-vd(A,B)}, \ \ \ \ \ (11)

where {A} and {B} are arbitrary local observables. This is not such a surprising conjecture in the context of the discussion above, and I would hope it is not too controversial. However, the point is not the physical content of the conjecture, but rather that it shifts the emphasis away from the properties of the eigenstates of the system, which are typically unstable with respect to perturbations, to the dynamical properties of local operators, which are much more stable with respect to perturbations.

Now is there any reason to think that Conjecture 1 is any easier to study than the localisation of many-particle eigenstates? Well, I hope to convince you that the answer is yes.

The first piece of evidence is that there is at least one interacting system which does (almost) satisfy (11), namely the {XY} model in a random magnetic field:

\displaystyle  	H_{XY} = \sum_{j=1}^{n-1} \sigma_j^x\sigma_{j+1}^x + \sigma_j^y\sigma_{j+1}^y +\sum_{j=1}^n x_j\sigma_j^z. \ \ \ \ \ (12)

(Now while the {XY} model is equivalent to a model of free fermions via the Jordan-Wigner transformation, the JW transform totally mangles local spin operators into nonlocal operators on fermions, which is why there was still something to prove.)
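One can see the localisation in (12) numerically by exploiting the free-fermion picture: up to normalisation conventions, the one-particle sector of the JW-transformed {XY} chain is governed by an Anderson-type hopping matrix. Here’s a minimal sketch (with a made-up field strength) comparing the ballistic spreading of a particle in the clean chain with the frozen spreading in the disordered one:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 200
sites = np.arange(n)
hop = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

def width(H1, t):
    # r.m.s. spread of a particle launched from the centre of the chain
    p = np.abs(expm(1j * t * H1)[:, n // 2]) ** 2
    m = p @ sites
    return np.sqrt(p @ (sites - m) ** 2)

H_clean = hop
H_dirty = hop + np.diag(rng.normal(0.0, 2.0, size=n))  # hypothetical field strength

for t in [5.0, 10.0, 20.0, 40.0]:
    print(f"t = {t:5.1f}  clean: {width(H_clean, t):7.2f}"
          f"  disordered: {width(H_dirty, t):7.2f}")

The clean width grows linearly with {t} (ballistic transport), while the disordered width quickly saturates.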

The second piece of evidence comes from an approach inspired by the quantum theory of noise in quantum optics.

4. The theory of quantum noise and disordered quantum systems

In this section I aim to convince you of a physical metaphor: if a system is exposed to a disordered magnetic field then the dynamics of the system are “equivalent” to those of the clean system under continuous measurement of the magnetic field. Thus, according to this interpretation, the reason the system dynamically localises is the quantum Zeno effect: a continuously measured quantum system will be “frozen” into an eigenstate of the measured observable.

In order to convince you of this interpretation I’m first going to show how techniques from the theory of quantum noise can provide nontrivial results about dynamical localisation in the Anderson model. Hopefully, once I’ve shown how these techniques can successfully calculate thermodynamic quantities, you’ll be willing to believe that the approach is applicable in general. I’m being careful here: ideally I’d like the results here to be at the level of mathematical rigour, but even at the level of physical rigour there are nontrivial justifications required.

Throughout most of the remainder of this section I’ll discuss the Anderson model for a 1D ring of {n} sites (although what I say easily extends to 2D and higher):

\displaystyle  	H_{AM}(\mathbf{x}) = \sum_{j} |j+1\rangle\langle j| + \mbox{h.c.} + \sum_{j=1}^n x_j |j\rangle\langle j|, \ \ \ \ \ (13)

where addition is modulo {n}, i.e., we identify site {n+1} with site {1}.

The quantities I want to discuss here are averages, i.e., the dynamical properties of {H_{AM}} averaged over all realisations of the noise. The first nontrivial such quantity is the averaged propagator:

\displaystyle  	S(t) = \mathbb{E}_{\mathbf{x}}[e^{itH_{AM}(\mathbf{x})}]. \ \ \ \ \ (14)
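Since everything that follows is about disorder averages, it’s worth noting that {S(t)} is trivial to estimate by brute-force Monte Carlo for small systems. A minimal sketch (hypothetical parameter values, with {p(\mathbf{x})\propto e^{-\|\mathbf{x}\|^2/\mu}} as above):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n, mu, samples, t = 50, 0.5, 200, 5.0  # hypothetical values
hop = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
hop[0, -1] = hop[-1, 0] = 1.0          # ring boundary conditions

S = np.zeros((n, n), dtype=complex)
for _ in range(samples):
    x = rng.normal(0.0, np.sqrt(mu / 2), size=n)  # density p(x) ~ e^{-x^2/mu}
    S += expm(1j * t * (hop + np.diag(x)))
S /= samples
print(np.round(np.abs(np.diag(S))[:5], 4))  # averaged return amplitudes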

Via a Fourier transform of {S(t)} in time one can easily estimate the local density of states:

\displaystyle  \begin{array}{rcl} 	\langle \rho(\omega, j) \rangle &=& \frac{1}{2\pi}\int_{-\infty}^\infty e^{it\omega} \langle j|S^\dag(t)|j\rangle\, dt \\ 	&=& \mathbb{E}_{\mathbf{x}}\Bigg[ \sum_{k=1}^{n} |\langle j | \phi_k (\mathbf{x})\rangle|^2\, \delta(\omega - E_k(\mathbf{x}))\Bigg] \end{array}

where {E_k(\mathbf{x})} and {|\phi_k(\mathbf{x})\rangle} are the {k}th eigenvalue and eigenvector of {H_{AM}(\mathbf{x})}. One can obtain the density of states {\rho(\omega)} via {\rho(\omega)=\frac{1}{n}\sum_{j=1}^n\langle \rho(\omega, j) \rangle}.

Via a Laplace transform of {S(t)} we can extract the averaged Green’s function:

\displaystyle  \begin{array}{rcl}  	\langle G_\pm(z) \rangle &=& i\mathcal{L}[S(t)](iz\pm\delta) = i\int_0^\infty e^{-(iz\pm\delta)t}\,\mathbb{E}_{\mathbf{x}}[e^{itH_{AM}(\mathbf{x})}]\, dt \\ 	&=& \mathbb{E}_{\mathbf{x}}\Big[\sum_{k=1}^n \frac{1}{z\mp i\delta - E_k(\mathbf{x})}|\phi_k(\mathbf{x})\rangle\langle \phi_k(\mathbf{x})|\Big]. \end{array}
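(As a sanity check, the averaged Green’s function is equally easy to estimate numerically by averaging resolvents. The sketch below, my own and with made-up parameters, uses the standard resolvent convention {(z - H)^{-1}}, which differs from the bookkeeping above only by signs, and reads off a {\delta}-smeared local density of states from its imaginary part.)

import numpy as np

rng = np.random.default_rng(3)
n, mu, delta, samples = 50, 0.5, 0.1, 200  # hypothetical values
hop = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
hop[0, -1] = hop[-1, 0] = 1.0

def avg_green(w):
    # Monte Carlo estimate of E_x[(w + i*delta - H)^{-1}]
    G = np.zeros((n, n), dtype=complex)
    for _ in range(samples):
        H = hop + np.diag(rng.normal(0.0, np.sqrt(mu / 2), size=n))
        G += np.linalg.inv((w + 1j * delta) * np.eye(n) - H)
    return G / samples

for w in [-2.0, 0.0, 2.0]:
    print(f"w = {w:+.1f}  smeared LDOS ~ {-avg_green(w)[0, 0].imag / np.pi:.4f}")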

It turns out that modifications of {S(t)} will allow us to calculate other thermodynamic quantities of interest: consider

\displaystyle  	M(t, u) = \mathbb{E}_{\mathbf{x}}[e^{itH_{AM}(\mathbf{x})}\otimes e^{iuH_{AM}(\mathbf{x})}]. \ \ \ \ \ (15)

Taking the Fourier transform of {M(t,u)} separately in {t} and {u} will allow us to calculate higher moments {\langle \rho^2(\omega, j) \rangle} of the local density of states. The method of the next section will be applicable both to {S(t)} and {M(t,u)} and, in a fairly obvious way, to higher “moments”.

Another quantity of central interest to which our approach applies is the density operator of the system averaged over the disorder:

\displaystyle  	\rho(t) = \mathbb{E}_{\mathbf{x}}[e^{itH_{AM}(\mathbf{x})}\rho(0) e^{-itH_{AM}(\mathbf{x})}]. \ \ \ \ \ (16)

This, of course, contains all the observable information about the system.

4.1. A “master equation” for dynamical localisation

The first idea I want to propose in order to calculate approximations to {S(t)} is to study not {H_{AM}} but rather the following Hamiltonian

\displaystyle  	H(\widehat{\mathbf{x}}) = T + \sum_{j=1}^n |j\rangle\langle j| \otimes \widehat{x}_j, \ \ \ \ \ (17)

where {T = \sum_{j} |j+1\rangle\langle j| + \mbox{h.c.}} is the hopping term, i.e., we introduce {n} ancilla continuous quantum degrees of freedom (harmonic oscillators, if you like) and promote the number {x_j} to the quantum position operator {\widehat{x}_j} of the {j}th mode.

We now define the following quantum state of the ancilla modes, which encodes the probability distribution for {\mathbf{x}}:

\displaystyle  \begin{array}{rcl}  	|G\rangle &=& \int \sqrt{p(\mathbf{x})}\,|\mathbf{x}\rangle\, d\mathbf{x}\\ 	&=& \frac{1}{(\pi\mu)^{n/4}}\int e^{-\frac{\|\mathbf{x}\|^2}{2\mu}}|\mathbf{x}\rangle\, d\mathbf{x} \end{array}

Now the following lemma shows us that we can realise {S(t)} via unitary dynamics on the lattice+ancilla system.

Lemma 1

\displaystyle  		S(t) = \mathbb{I}\otimes \langle G| (e^{itH(\widehat{\mathbf{x}})}) \mathbb{I}\otimes |G\rangle. 	\ \ \ \ \ (18)

Proof: Notice that the operators {\widehat{x}_j} commute with the Hamiltonian, so writing out the Taylor series for {e^{itH(\widehat{\mathbf{x}})}} and taking the expectation gives us

\displaystyle  		\frac{1}{(\pi\mu)^{n/2}}\int e^{-\frac{\|\mathbf{x}\|^2}{\mu}} e^{itH_{AM}(\mathbf{x})} d\mathbf{x} = S(t). 	\ \ \ \ \ (19)

\Box

Thus we learn that {H(\widehat{\mathbf{x}})} encodes all the properties of {H_{AM}(\mathbf{x})}. Now the Hamiltonian {H(\widehat{\mathbf{x}})} describes the dynamics of the clean system interacting with {n} ancilla systems which are subsequently traced out. This is the generic setting of decoherence in quantum systems. In our case the dynamics generated by {H(\widehat{\mathbf{x}})} are reminiscent of the continuous measurement of {\widehat{x}_j} on the clean system (there are some important differences though). We’re going to exploit this “moral” connection (tenuous though it is) to derive an evolution equation for {S(t)}; the derivation here mirrors the derivation of the master equation in the quantum noise literature.

The first step is to study {W(t) = e^{-itT}e^{itH(\widehat{\mathbf{x}})}}; writing {V = \sum_{j=1}^n |j\rangle\langle j| \otimes \widehat{x}_j}:

\displaystyle  	\frac{dW(t)}{dt} = iV(t)W(t), \ \ \ \ \ (20)

where {V(t) = e^{-itT}Ve^{itT}}. Integrating both sides gives us the integral equation

\displaystyle  	W(t) = W(0) + i\int_0^t dt_1 V(t_1)W(t_1). \ \ \ \ \ (21)

Iterating this equation gives us the Dyson series:

\displaystyle  	W(t) = \mathbb{I} + \sum_{j=1}^\infty i^j \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{j-1}} dt_j\, V(t_1)V(t_2)\cdots V(t_j), \ \ \ \ \ (22)

where we’ve used the fact that {W(0) = \mathbb{I}}. We now trace out the ancilla modes by taking the expectation with respect to {|G\rangle}:

\displaystyle  	Y(t) = \mathbb{I} + \sum_{j=1}^\infty i^j \int_0^t dt_1 \cdots \int_0^{t_{j-1}} dt_j\, \mathbb{I}\otimes \langle G|\,(V(t_1)V(t_2)\cdots V(t_j))\,\mathbb{I}\otimes |G\rangle,

where {Y(t) = e^{-itT}S(t)}. The next step is to write

\displaystyle  	Y(t) = \sum_{j=0}^\infty L_j(t), \ \ \ \ \ (23)

where

\displaystyle  	L_j(t) = i^j \int_0^t dt_1 \cdots \int_0^{t_{j-1}} dt_j\, \mathbb{I}\otimes \langle G|\,(V(t_1)V(t_2)\cdots V(t_j))\,\mathbb{I}\otimes |G\rangle.

Thus

\displaystyle  	\frac{dY(t)}{dt} = \left(\sum_{j=0}^\infty \frac{dL_j(t)}{dt}\right) Y^{-1}(t) Y(t) \ \ \ \ \ (24)

We now make the main approximation of this entire approach; in fact, this is the only approximation we are going to make. The idea is to treat the disorder strength {\mu} as a small parameter so that we can neglect all terms involving more than two {V(t)}s. To this order {Y^{-1}(t) \equiv \mathbb{I}}. Noticing that {\mathbb{I}\otimes \langle G|(V(t_1)V(t_2)\cdots V(t_j))\mathbb{I}\otimes |G\rangle = 0} when {j} is odd then leaves us with

\displaystyle  	\frac{dY(t)}{dt} = -\int_0^t dt_1 \mathbb{I}\otimes \langle G|(V(t)V(t_1))\mathbb{I}\otimes |G\rangle Y(t). \ \ \ \ \ (25)

Carrying out the requisite integrals gives us

\displaystyle  	\frac{dY(t)}{dt} = -\frac{\mu}{2}\sum_{j=1}^n\int_0^t dt_1 e^{-itT}|j\rangle\langle j|e^{i(t-t_1)T}|j\rangle\langle j|e^{it_1T} Y(t). \ \ \ \ \ (26)

We can simplify the summation over {j} to give us

\displaystyle  	\frac{dY(t)}{dt} = -\frac{\mu}{2}\int_0^t dt_1 e^{-itT}\mbox{diag}(e^{i(t-t_1)T})e^{it_1T} Y(t). \ \ \ \ \ (27)

Now, by the translation invariance of the ring (which is restored once we average over the disorder), we know that {Y(t)} is diagonal in the basis

\displaystyle  	|W(k)\rangle = \frac{1}{\sqrt{n}}\sum_{j=1}^{n} e^{\frac{2\pi i}{n} jk} |j\rangle \ \ \ \ \ (28)

so we may as well study

\displaystyle  	y_k(t) = \langle W(k)|Y(t)| W(k)\rangle. \ \ \ \ \ (29)

These coefficients satisfy the integro-differential equation

\displaystyle  	\frac{dy_k(t)}{dt} = -\frac{\mu}{2} \int_0^t dt_1 \alpha(t-t_1) e^{-2i(t-t_1)\cos(2\pi k/n)} y_k(t), \ \ \ \ \ (30)

where

\displaystyle  	\alpha(t-t_1) = \frac1n\sum_{k=1}^{n} e^{2i(t-t_1)\cos(2\pi k/n)} \ \ \ \ \ (31)

and we’ve used the fact that {T|W(k)\rangle = 2\cos(2\pi k/n)|W(k)\rangle}. We’ll soon be taking the limit {n\rightarrow\infty}. In preparation for this it is convenient to approximate {\alpha(t-t_1)} already:

\displaystyle  	\alpha(t-t_1) = J_0(2(t-t_1)) + O(1/n), \ \ \ \ \ (32)

where {J_0(x)} is the Bessel function of the first kind. (This is a pretty standard fact which follows from approximating the sum by an integral and using the Jacobi-Anger expansion.)
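(This is also easy to check numerically; a tiny sketch comparing the exact sum (31) with {J_0(2s)}:)

import numpy as np
from scipy.special import j0

n = 200
k = np.arange(n)
for s in [0.5, 1.0, 2.0, 5.0]:
    # alpha(s) = (1/n) sum_k exp(2is cos(2 pi k/n)); its imaginary part
    # vanishes by symmetry of the sum
    alpha = np.mean(np.exp(2j * s * np.cos(2 * np.pi * k / n)))
    print(f"s = {s:3.1f}  alpha = {alpha.real:+.6f}  J0(2s) = {j0(2 * s):+.6f}")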

So our integro-differential equation for {y_k(t)} is now

\displaystyle  	\frac{dy_k(t)}{dt} = -\frac{\mu}{2} \int_0^t dt_1 J_0(2(t-t_1)) e^{-2i(t-t_1)\cos(2\pi k/n)} y_k(t) \ \ \ \ \ (33)

A change of variable simplifies the equation to

\displaystyle  	\frac{dy_k(t)}{dt} = -\frac{\mu}{2} y_k(t) \int_0^t dt_1 J_0(2t_1) e^{-2it_1\cos(2\pi k/n)} \ \ \ \ \ (34)

Thus

\displaystyle  	\frac{d\log(y_k(t))}{dt} = -\frac{\mu}{2} \int_0^t dt_1 J_0(2t_1) e^{-2it_1\cos(2\pi k/n)} = -\frac{\mu}{2} g_k(t) \ \ \ \ \ (35)

so that

\displaystyle  	y_k(t) = e^{-\frac{\mu}{2} \int_0^t dt_1\, g_k(t_1)}. \ \ \ \ \ (36)

Now we really want {S(t) = \sum_{k=1}^n s_k(t)|W(k)\rangle \langle W(k)|}, but note that {y_k(t) = e^{-2it\cos(2\pi k/n)}s_k(t)}, so that

\displaystyle  	s_k(t) = e^{2it\cos(2\pi k/n)-\frac{\mu}{2} \int_0^t dt_1\, g_k(t_1)}. \ \ \ \ \ (37)

Now we’ll be able to calculate the density of states from the trace {\frac{1}{n}\mbox{tr}(S(t))}, so we now focus on {\frac{1}{n}\mbox{tr}(S(t)) = \gamma(t) = \frac{1}{n}\sum_{k=1}^n s_k(t)}:

\displaystyle  	\gamma(t) = \frac{1}{n}\sum_{k=1}^n e^{2it\cos(2\pi k/n)-\frac{\mu}{2} \int_0^t dt_1\, g_k(t_1)}. \ \ \ \ \ (38)

We now finally take the limit {n\rightarrow \infty}: replacing the sum with an integral we obtain

\displaystyle  	\gamma(t) = \frac{1}{2\pi} \int_0^{2\pi} d\phi \exp\left(2it\cos(\phi) - \frac{\mu}{2}\int_0^tdt_1\int_0^{t_1}dt_2\, J_0(2t_2)e^{-2it_2\cos(\phi)}\right).

This looks pretty ghastly, but at least we’ve reduced the problem to quadratures. Actually it is relatively tractable: we really only care about timescales {t \lesssim 1/\sqrt{\mu}}, because the approximation we made above is really only valid on these timescales. For such timescales we can approximate the Bessel function as {J_0(2t) \sim 1}, etc. Thus, to second order in {t} in the exponent, the expression looks like

\displaystyle  	\gamma(t) \approx \frac{1}{2\pi} \int_0^{2\pi} d\phi \exp\left(2it\cos(\phi)-\frac{\mu t^2}{4}\right) \ \ \ \ \ (39)

which reduces to

\displaystyle  	\gamma(t) \approx J_0(2t) \exp\left(-\frac{\mu t^2}{4}\right). \ \ \ \ \ (40)
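Before moving on, it’s easy to check (40) against a brute-force disorder average; a minimal sketch of mine (made-up values of {n}, {\mu} and the sample size):

import numpy as np
from scipy.linalg import expm
from scipy.special import j0

rng = np.random.default_rng(4)
n, mu, samples = 100, 0.2, 200  # hypothetical values
hop = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
hop[0, -1] = hop[-1, 0] = 1.0

for t in [0.5, 1.0, 2.0, 4.0]:
    # Monte Carlo estimate of gamma(t) = (1/n) tr S(t); the imaginary part
    # averages to zero by the symmetry of the spectrum
    mc = np.mean([np.trace(expm(1j * t * (hop + np.diag(
        rng.normal(0.0, np.sqrt(mu / 2), size=n))))).real / n
        for _ in range(samples)])
    print(f"t = {t:3.1f}  Monte Carlo: {mc:+.4f}  "
          f"J0(2t)exp(-mu t^2/4): {j0(2 * t) * np.exp(-mu * t ** 2 / 4):+.4f}")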

Fourier transforming this gives us the density of states: it is an “inverse semicircle”

\displaystyle  \rho_0(\omega) = \frac{1}{\pi\sqrt{4-\omega^2}}, \ \ \ \ \ (41)

for {-2\le \omega \le 2}, convolved with a Gaussian of variance {\mu/2}, with some corrections of higher order in {\mu}, which is what it “should” be. Obviously one could spend longer investigating the asymptotics of these expressions, and maybe one should, because this represents another way to calculate the density of states for the Anderson model in the presence of arbitrary disorder. (The other techniques I’m aware of are the field-theory approaches using the replica trick/supersymmetric method, and the coherent potential approximation (CPA).)
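(For what it’s worth, here is a sketch checking this prediction: histogram the disorder-averaged eigenvalues and compare with the semicircle convolved with a Gaussian; all parameter values are made up.)

import numpy as np

rng = np.random.default_rng(5)
n, mu, samples = 400, 0.2, 50  # hypothetical values
hop = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
hop[0, -1] = hop[-1, 0] = 1.0

# empirical density of states, averaged over disorder realisations
evals = np.concatenate([
    np.linalg.eigvalsh(hop + np.diag(rng.normal(0.0, np.sqrt(mu / 2), size=n)))
    for _ in range(samples)])
hist, edges = np.histogram(evals, bins=60, range=(-3, 3), density=True)
w = 0.5 * (edges[:-1] + edges[1:])

# inverse semicircle convolved with a Gaussian of variance mu/2
clean = np.where(np.abs(w) < 2,
                 1.0 / (np.pi * np.sqrt(np.clip(4 - w ** 2, 1e-9, None))), 0.0)
gauss = np.exp(-w ** 2 / mu) / np.sqrt(np.pi * mu)
pred = np.convolve(clean, gauss, mode="same") * (w[1] - w[0])

for i in range(5, 60, 12):
    print(f"w = {w[i]:+.2f}  empirical: {hist[i]:.3f}  predicted: {pred[i]:.3f}")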

While I’ve only discussed the 1D discrete Anderson model there is really no obstruction to applying the above argument to the 2D model etc., and it is easy to include spin and magnetic field effects as well.

How well does the approximation used here compare with existing methods? To discuss this we need to understand how other methods approximate the averaged Green’s function {G(z)^{-1} = z\mathbb{I} - T - V}. The simplest approach, and the one that is often adopted to study the effect of small disorder, is to use the series

\displaystyle  	G(z) = G_0(z) + G_0(z)VG_0(z) + G_0(z)VG_0(z)VG_0(z) + \cdots, \ \ \ \ \ (42)

where {G_0(z) = (z\mathbb{I} - T)^{-1}}, and then to average over the disorder:

\displaystyle  	\langle G(z) \rangle = G_0(z) + \langle G_0(z) V G_0(z) \rangle+ \langle G_0(z) VG_0(z)V G_0(z)\rangle + \cdots, \ \ \ \ \ (43)

This series can be represented diagrammatically with lines and crosses. Now, much effort has been spent on summing infinite subsets of terms in this expansion, and many powerful results have been derived in this way.

The Dyson series (22) we used to derive the evolution equation is, after a Laplace transform, exactly the series (42). Thus, before we make our approximation, we are summing every term in the series. After we make the approximation giving rise to the evolution equation (24) we can compare the series solution it gives rise to with the series (43). What you’ll find is that every term in the series is represented, but their coefficients differ from the exact series at higher orders. Thus our approach is likely to provide good results in comparison to approaches based on summing subsets of the terms.

Indeed, because we can actually (with a bit of work) bound the error we make, we can exactly and analytically provide bounds on the error we make in calculating the density of states! I’ll leave this highly technical process to another post, or another life…

Thus we’ve shown that a master-equation type approach can give good results for a thermodynamic quantity of interest, namely the density of states. I intend to work on calculating other quantities and comparing them with known results about the discrete Anderson model; perhaps this approach gives an alternative to existing methods which may be simpler to apply in some situations.

5. A master equation for disordered strongly interacting systems

Applying the derivation described in the previous section to our original hamiltonian

\displaystyle  	H(\mathbf{x}) = \sum_{j=1}^{n-1} h_j + \sum_{j=1}^{n} x_j\sigma_j^z, \ \ \ \ \ (44)

where now {T = \sum_{j=1}^{n-1} h_j} and {V = \sum_{j=1}^{n} x_j\sigma_j^z}, gives us an evolution equation for the averaged density operator of the form

\displaystyle  	\frac{d\rho(t)}{dt} = -\frac{\mu}{2} \sum_{j=1}^{n}\int_0^t [\sigma_j^z(t), [\sigma_j^z(t_1), \rho(t)]]\, dt_1. \ \ \ \ \ (45)

This equation is temptingly similar to the equation

\displaystyle  	\frac{d\rho(t)}{dt} = -\frac{\mu}{2} \sum_{j=1}^{n}[\sigma_j^z(t), [\sigma_j^z(t), \rho(t)]], \ \ \ \ \ (46)

which models a continuously-measured clean lattice system. (Indeed, if one further makes the so-called Markov assumption then one can derive exactly this equation from (45).) Now Christian Burrell, Jens Eisert, and I showed that evolution equations of this form do give rise to improved Lieb-Robinson bounds, at least for the {XY} model. However, the bounds we obtain do not predict localised behaviour, rather just diffusive behaviour.

Of course, there is no reason to think that a Markovian master equation of the form (46) has anything to do with disordered systems; the Markov approximation seems to be an approximation too far, as it predicts only diffusive rather than localised behaviour, but the analogy is certainly there. Unfortunately, we must deal with the memory effects of (45). This is a subject for another post…
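In the meantime, the Markovian equation (46) itself is easy to explore numerically. Here’s a minimal sketch of mine (made-up parameters) which integrates the standard Schrödinger-picture form, {\frac{d\rho}{dt} = -i[H,\rho] - \frac{\mu}{2}\sum_j[\sigma_j^z,[\sigma_j^z,\rho]]}, for a small {XY} chain; an initially flipped spin at one end smears out diffusively:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, site, n):
    return np.kron(np.kron(np.eye(2 ** site), op), np.eye(2 ** (n - site - 1)))

n, mu, dt, steps = 5, 0.5, 0.01, 400  # hypothetical values
H = sum(embed(sx, j, n) @ embed(sx, j + 1, n) +
        embed(sy, j, n) @ embed(sy, j + 1, n) for j in range(n - 1))
Z = [embed(sz, j, n) for j in range(n)]

def rhs(rho):
    # drho/dt = -i[H, rho] - (mu/2) sum_j [sigma_j^z, [sigma_j^z, rho]]
    out = -1j * (H @ rho - rho @ H)
    for Zj in Z:
        out -= 0.5 * mu * (Zj @ Zj @ rho - 2 * Zj @ rho @ Zj + rho @ Zj @ Zj)
    return out

psi = np.zeros(2 ** n, dtype=complex)
psi[2 ** (n - 1)] = 1.0  # one flipped spin at the left end of the chain
rho = np.outer(psi, psi.conj())

for _ in range(steps):  # simple fourth-order Runge-Kutta integration
    k1 = rhs(rho); k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2); k4 = rhs(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

print([round(np.trace(Zj @ rho).real, 3) for Zj in Z])  # magnetisation profile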
