At the moment I am in between research projects: I am “working” on a bunch of old projects, some of which are many years old, but I haven’t had any new ideas for any of them in a long time and, hence, I haven’t made any progress whatsoever. At the same time I am thinking about beginning work on some new projects. Most notably, I want to spend some time understanding quantum systems with static and annealed disorder, and the connections between these systems and computational complexity. Unfortunately the literature on disordered quantum systems is vast, to say the least. Hence, I am putting off learning it. So now I am procrastinating by learning about a whole bunch of new ideas in the hope of learning something that will make the entry into the disordered systems literature a little smoother.

Basically I am now going to play out my learning curve through this blog.

The type of problems I eventually hope to study will be of the following form. Take some family of computational problems and consider a random instance from this family. What is the running time, on average, of some quantum algorithm to solve the problem? At the same time I'd also like to consider families of quantum problems (e.g., a family of quantum systems) and understand the running time, on average, of *either* classical or quantum algorithms to calculate approximations to, e.g., expectation values of local observables of a random instance. In both cases there is obviously some quantum system (i.e., the quantum computer in the first case, and the quantum system studied in the second case), and there is obviously some disorder present. The hope, broadly speaking, is to exploit the ubiquitous phenomenon of Anderson localisation to understand what happens in each case.

However, except in some very special cases, the problems I want to study don’t reduce in any obvious way to systems studied in the disordered quantum systems literature (at least, not so far as I can see). So I’m now casting around looking for interesting stuff which might have some relevance…

At the most abstract and high level, all of the problems I want to eventually consider require that one understands the critical points of a random function (which is usually related to the running time). With a bit of luck one will be able to write this function as a polynomial. Hence it'd be nice to understand, say, the roots of random polynomials.

**1. Real roots of random polynomials **

In this post I want to discuss an old paper of M. Kac which considers the following problem:

Consider the polynomial

$$f_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n.$$

In general such a polynomial will have a bunch of real roots and complex roots. Suppose that the coefficients $a_j$, $j = 0, 1, \ldots, n$, are chosen independently at random according to the gaussian distribution, i.e., suppose that the probability density function for each $a_j$ is

$$p(a) = \frac{1}{\sqrt{2\pi}} e^{-a^2/2}. \qquad (2)$$

Then the question we want to answer is: how many real roots, on average, does $f_n$ have? Apparently this problem was first studied by Littlewood and Offord, who considered the case where the $a_j$ are uniformly distributed in $[-1,1]$.

[Figure: a random polynomial of degree 100.]

Denote by $N_n$ the number of real roots of $f_n$. What M. Kac actually proved is the following

**Theorem 1** Let the $a_j$ be distributed as in (2). Then

$$\mathbb{E}[N_n] = \frac{4}{\pi} \int_0^1 \sqrt{\frac{1}{(1-x^2)^2} - \frac{(n+1)^2 x^{2n}}{(1-x^{2n+2})^2}}\, dx.$$

Further, $\mathbb{E}[N_n]$ behaves, asymptotically, as

$$\mathbb{E}[N_n] \sim \frac{2}{\pi} \log n.$$

Pretty impressive really. Note that this says there are not many real zeros: even if the degree is 1000 there will, on average, be only about 4 or 5 real zeros!
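This is easy to check empirically. Here's a quick Monte Carlo sketch (my own, not from Kac's paper): sample Gaussian polynomials, count their real roots with `numpy.roots`, and compare with the leading-order prediction. The degree, number of trials, and the tolerance for deciding whether a numerically computed root is real are all arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50          # polynomial degree (arbitrary)
trials = 500    # Monte Carlo sample size (arbitrary)

def count_real_roots(coeffs, tol=1e-6):
    # Roots whose imaginary part is numerically zero are counted as real.
    r = np.roots(coeffs)
    return int(np.sum(np.abs(r.imag) < tol))

counts = [count_real_roots(rng.standard_normal(n + 1)) for _ in range(trials)]
mean_roots = float(np.mean(counts))
asymptotic = (2 / np.pi) * np.log(n)   # Kac's leading-order prediction

print(mean_roots, asymptotic)
```

For degree 50 the empirical mean comes out a little above $\frac{2}{\pi}\log 50 \approx 2.5$, consistent with the known constant-order correction to the asymptotic formula.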

Now this result was the state of the art in 1948. Obviously mathematics has moved on since then: there is now a vast literature on roots of random polynomials, and I can't hope to scratch the surface here. But what I want to achieve is to understand the basic technique exploited in this paper (the Kac-Rice formula), which has since gone on to form the cornerstone of this body of theory.

The proof of this theorem is sufficiently elementary that I can actually discuss it in full here.

**2. The Kac-Rice formula **

The central tool exploited in Kac's paper is what is now known as the *Kac-Rice* formula. This is a really neat trick and it is definitely worth explaining its derivation here. For this section I'm going to follow the heuristic derivation presented in the book of Adler.

Let $f$ be a continuous function on $\mathbb{R}$ having a continuous first derivative and only a finite number of turning points in each finite interval. Then we have the following

**Lemma 2** If neither $a$ nor $b$ is a zero of $f$ then, defining $N(a,b)$ to be the number of zeros of $f$ in $(a,b)$,

we have that

$$N(a,b) = \int_a^b \delta(f(x))\, |f'(x)|\, dx,$$

where $\delta$ is the Dirac delta function.

*Proof:* (this proof is at the level of "physical rigour", i.e., it is heuristic. You can add in the excruciating detail required to make it rigorous yourself! We shall later, in the words of Adler, "commit serious crimes", including the exchange of orders of integration, etc. 😉 ) The key to the proof is to notice that

$$\delta(f(x)) = \sum_j \frac{\delta(x - x_j)}{|f'(x_j)|},$$

where the sum runs over the zeros $x_j$ of $f$ in $(a,b)$.

Now suppose that $(x_j - \epsilon, x_j + \epsilon)$ is a small interval centred on a point $x_j$ where $f(x_j) = 0$. Then a simple change of variables $u = f(x)$ gives us (beware: we're treating $\delta$ as though it were a smooth function with compact support that tails off to zero either side!):

$$\int_{x_j - \epsilon}^{x_j + \epsilon} \delta(f(x))\, |f'(x)|\, dx = \int \delta(u)\, du = 1.$$

The next step is to concatenate all such intervals and note that there is no contribution to the following integral outside of them:

$$\int_a^b \delta(f(x))\, |f'(x)|\, dx = \sum_j 1 = N(a,b),$$

and we are done!
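The heuristic can be illustrated numerically (my own sketch, not from Adler): replace the Dirac delta by a narrow Gaussian $\delta_\epsilon$ and check that $\int \delta_\epsilon(f(x))|f'(x)|\,dx$ recovers the zero count of a function whose zeros we know. I take $f(x) = \sin x$ on $[0.1, 10]$, which has exactly three zeros there ($\pi$, $2\pi$, $3\pi$); the width $\epsilon$ and grid resolution are arbitrary.

```python
import numpy as np

eps = 0.01                                  # width of the smoothed delta
x = np.linspace(0.1, 10.0, 400_001)         # dense grid (spacing << eps)
f, fprime = np.sin(x), np.cos(x)

# Gaussian approximation to the Dirac delta, applied to f(x)
delta_eps = np.exp(-f**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# Riemann-sum approximation of the Kac-Rice integral
count = float(np.sum(delta_eps * np.abs(fprime)) * (x[1] - x[0]))
print(count)   # ≈ 3
```

Each zero crossing contributes approximately one unit of mass, exactly as in the change-of-variables step above.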

This lemma has since been generalised in a vast number of obvious and non-obvious ways. See, eg., Adler’s book for details.

**3. Applying the Kac-Rice formula to random polynomials **

**Lemma 3** If $f_n$ is a polynomial of degree $n$, then $N_n \le n$.

*Proof:* A polynomial of degree $n$ has at most $n$ roots, hence the bound.

Now we turn to the calculation of

$$\mathbb{E}[N_n] = \mathbb{E}\left[\int_{-\infty}^{\infty} \delta(f_n(x))\, |f_n'(x)|\, dx\right].$$

The trick here is to egregiously change the order of integration and expectation and consider

$$\mathbb{E}[N_n] = \int_{-\infty}^{\infty} \mathbb{E}\big[\delta(f_n(x))\, |f_n'(x)|\big]\, dx.$$

Thus we are reduced to understanding

$$\mathbb{E}\big[\delta(f_n(x))\, |f_n'(x)|\big]$$

for each fixed $x$.

This expression involves the joint distribution of $f_n(x)$ and $f_n'(x)$. To evaluate it we use the following lemma

**Lemma 4 (Bernstein limit result)** If $c_0, c_1, \ldots, c_n$ and $d_0, d_1, \ldots, d_n$ are real numbers with $A = \sum_{j=0}^n c_j^2$, $B = \sum_{j=0}^n c_j d_j$, $C = \sum_{j=0}^n d_j^2$, and if $AC - B^2 > 0$, then the joint probability density function of $\xi = \sum_{j=0}^n c_j a_j$ and $\eta = \sum_{j=0}^n d_j a_j$ is equal to

$$p(\xi, \eta) = \frac{1}{2\pi\sqrt{AC - B^2}} \exp\left(-\frac{C\xi^2 - 2B\xi\eta + A\eta^2}{2(AC - B^2)}\right),$$

where the $a_j$ are distributed according to (2).

*Proof:* This follows from a limit theorem of Bernstein (basically one just applies a central limit theorem-type argument to the characteristic function and keeps track of the mutual dependence of $\xi$ and $\eta$…) BTW: does anyone know a more modern reference than this?
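For Gaussian $a_j$ no limit theorem is really needed, since $(\xi, \eta)$ is an exactly Gaussian pair, and the covariance structure in the lemma can be checked directly by simulation. A sketch (the evaluation point $x_0$ and the choice $c_j = x_0^j$, $d_j = j x_0^{j-1}$, anticipating the application below, are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x0, m = 0.5, 6                                  # arbitrary point and length
j = np.arange(m)
c = x0 ** j                                     # c_j = x0^j
d = j * x0 ** np.maximum(j - 1, 0)              # d_j = j * x0^(j-1) (d_0 = 0)

A, B, C = c @ c, c @ d, d @ d                   # predicted covariance entries

a = rng.standard_normal((200_000, m))           # i.i.d. N(0,1) coefficients
xi, eta = a @ c, a @ d
emp = np.cov(xi, eta)                           # empirical 2x2 covariance
print(emp, (A, B, C))
```

The empirical covariance matrix should match $\begin{pmatrix} A & B \\ B & C \end{pmatrix}$ up to sampling error.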

**4. Completing the proof **

Now we know the joint distribution function for $(f_n(x), f_n'(x))$ we are almost done: *Proof:* (of main theorem). Use the Bernstein limit lemma to write

$$\mathbb{E}\big[\delta(f_n(x))\, |f_n'(x)|\big] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \delta(\xi)\, |\eta|\, p(\xi, \eta)\, d\xi\, d\eta,$$

where now $c_j = x^j$, $d_j = j x^{j-1}$, and $A$, $B$, $C$ are the corresponding sums. This reduces to

$$\int_{-\infty}^{\infty} |\eta|\, \frac{1}{2\pi\sqrt{AC - B^2}} \exp\left(-\frac{A\eta^2}{2(AC - B^2)}\right) d\eta.$$

Carrying out this gaussian integral gives us

$$\mathbb{E}\big[\delta(f_n(x))\, |f_n'(x)|\big] = \frac{\sqrt{AC - B^2}}{\pi A}.$$
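This closed form is easy to verify numerically for illustrative values of $A$, $B$, $C$ (chosen arbitrarily by me, subject to $AC - B^2 > 0$):

```python
import numpy as np

A, B, C = 2.0, 0.7, 1.5                 # arbitrary, with A*C - B^2 > 0
D = A * C - B**2

# Riemann-sum evaluation of  ∫ |eta| p(0, eta) d eta
eta = np.linspace(-50.0, 50.0, 2_000_001)
p0 = np.exp(-A * eta**2 / (2 * D)) / (2 * np.pi * np.sqrt(D))
integral = float(np.sum(np.abs(eta) * p0) * (eta[1] - eta[0]))

closed_form = np.sqrt(D) / (np.pi * A)
print(integral, closed_form)
```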

Now, an elementary summation (using $A = \sum_{j=0}^n x^{2j}$) gives us that

$$\frac{AC - B^2}{A^2} = \frac{1}{(1-x^2)^2} - \frac{(n+1)^2 x^{2n}}{(1-x^{2n+2})^2}.$$

Substituting this into our original integral gives us

$$\mathbb{E}[N_n] = \frac{1}{\pi} \int_{-\infty}^{\infty} \sqrt{\frac{1}{(1-x^2)^2} - \frac{(n+1)^2 x^{2n}}{(1-x^{2n+2})^2}}\, dx.$$

The form stated in the theorem follows from an "elementary" transformation (the integrand behaves nicely under $x \mapsto -x$ and $x \mapsto 1/x$, which folds the whole real line onto $[0,1]$)… The asymptotic result is a bit more tedious, and I'll leave that to you to chase up.
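It's reassuring to evaluate the resulting integral numerically (my own sketch, using `scipy.integrate.quad`): for degree $n = 1$ the formula must give exactly one real root, and for larger $n$ it should sit near $\frac{2}{\pi}\log n$.

```python
import numpy as np
from scipy.integrate import quad

def kac_integrand(x, n):
    return np.sqrt(1.0 / (1.0 - x**2) ** 2
                   - (n + 1) ** 2 * x ** (2 * n) / (1.0 - x ** (2 * n + 2)) ** 2)

def expected_real_roots(n):
    # Stop just short of x = 1, where the integrand is a finite 0/0 limit;
    # the truncated tail contributes negligibly.
    val, _ = quad(kac_integrand, 0.0, 1.0 - 1e-6, args=(n,), limit=200)
    return 4.0 * val / np.pi

r1 = expected_real_roots(1)
r50 = expected_real_roots(50)
print(r1)    # → 1.0: a real linear polynomial always has one real root
print(r50)
```

For $n = 1$ the integrand collapses to $1/(1+x^2)$ and the formula gives exactly 1, a nice consistency check; for $n = 50$ the value lies a little above $\frac{2}{\pi}\log 50$, matching the Monte Carlo estimate from earlier.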

**5. Extensions and applications **

One can imagine a zillion variations on this theme. I won't explore them here. I just want to comment on what is actually needed by the above argument. Basically, if one has a family of functions $f$ for which the joint probability density function of the tuple $(f(x), f'(x))$ for any given $x$ is known, then one can get at the number of real solutions within some interval. This is rather interesting in the context of disordered systems: suppose we have the (1d) Anderson model

$$(H\psi)_j = \psi_{j+1} + \psi_{j-1} + V_j \psi_j,$$

where the $V_j$ are i.i.d. random variables.

Write $p_n(\lambda) = \det(\lambda I - H_n)$ for the characteristic polynomial of $H$ restricted to the first $n$ sites. If we know the expected number of zeros of $p_n$ in an interval then we can obtain the averaged eigenvalue density function. Applying the method of Kac leads us to the consideration of the joint probability density function of $(p_n(\lambda), p_n'(\lambda))$. Now, this looks tricky, but we do have the following recurrences, which might help:

$$p_n(\lambda) = (\lambda - V_n)\, p_{n-1}(\lambda) - p_{n-2}(\lambda)$$

and

$$p_n'(\lambda) = p_{n-1}(\lambda) + (\lambda - V_n)\, p_{n-1}'(\lambda) - p_{n-2}'(\lambda).$$

To finish off the analysis we'd need to derive a Bernstein-like limit theorem for $(p_n(\lambda), p_n'(\lambda))$ using these recurrences. I can't see how to do this (if it is even possible!)…
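The recurrences themselves can at least be sanity-checked numerically against determinants and a finite-difference derivative. A sketch, under the tridiagonal convention above (diagonal $V_j$, off-diagonal hopping 1; the disorder distribution and $\lambda$ are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(7)
n, lam = 8, 0.3
V = rng.uniform(-1.0, 1.0, size=n)

# Three-term recurrences: p_m = char. poly. of the first m sites, q_m = p_m'
p = [1.0, lam - V[0]]   # p_0, p_1
q = [0.0, 1.0]          # q_0 = p_0', q_1 = p_1'
for m in range(2, n + 1):
    p.append((lam - V[m - 1]) * p[m - 1] - p[m - 2])
    q.append(p[m - 1] + (lam - V[m - 1]) * q[m - 1] - q[m - 2])

def charpoly(l, m):
    # det(l*I - H_m) for the m-site tridiagonal Anderson matrix
    H = np.diag(V[:m]) + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    return np.linalg.det(l * np.eye(m) - H)

dets = [charpoly(lam, m) for m in range(1, n + 1)]
h = 1e-6
derivs = [(charpoly(lam + h, m) - charpoly(lam - h, m)) / (2 * h)
          for m in range(1, n + 1)]
print(p[1:], dets)
```

Both recurrences agree with the direct determinant computation, so at least the bookkeeping is right even if the limit theorem remains out of reach.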

BTW: has the *averaged* eigenvalue density function for the Anderson model ever been calculated at the level of mathematical rigour? I haven’t found any reference for such a thing…

I’m pretty sure what you want to know about the 1d Anderson model has been done very carefully. Not just the averaged eigenvalue density, but also the localization length. Take a look maybe at the book “Schrödinger Operators” by Cycon, H.L., Froese, R.G., Kirsch, W., Simon, B., in the section on the point spectrum of the Anderson model. If it isn’t there, it’s definitely done elsewhere.

Dear Matt,

Many thanks for the reference! I’ll check it out.

I’ve had a hard time reconciling the mathematical literature with the physical: I’m certainly aware that in the mathematical literature the spectra of many random Schrödinger operators with enough disorder have been proven to be pure point, almost surely over realisations of the disorder. In the physical literature the supersymmetric method is good at obtaining averaged quantities. But I’m far from familiar with the interface between these two bodies of theory, and any statements I make about this are purely based on ignorance…

Sincerely,

Tobias

I remember seeing one paper (can’t remember the author, any details, anything) which was (in my very rough recollection) claiming to give a mathematical proof of localization in 1d using supersymmetry techniques. This required I think lots of use of tools of analysis to estimate errors in the physics calculation.

However, in general I think you’re right: other than this one paper, the math and physics literatures seem to use very disjoint tools on this problem.