QUEST special lecture 1

October 28, 2010

Yesterday I gave the first of three QUEST special lectures. You can find the slides here. Readers of this blog will be familiar with the content: I talked about the simulation problem and hamiltonian complexity and ended with the result that the dynamics of a 1D quantum spin system can be efficiently approximated (by a quantum cellular automaton) for |t| \sim \log(n).

In the next lecture I’ll show how to turn this result around and use quantum circuits to simulate the statics and dynamics of strongly interacting quantum systems via the variational principle.

Reading group: on the average number of real roots of a random algebraic equation, M. Kac

April 14, 2009

At the moment I am in between research projects: I am “working” on a bunch of old projects, some of which are many years old, but I haven’t had any new ideas for any of them in a long time and, hence, I haven’t made any progress whatsoever. At the same time I am thinking about beginning work on some new projects. Most notably, I want to spend some time understanding quantum systems with static and annealed disorder, and the connections between these systems and computational complexity. Unfortunately the literature on disordered quantum systems is vast, to say the least. Hence, I am putting off learning it. So now I am procrastinating by reading about a whole bunch of new ideas in the hope of picking up something that will make the entry into the disordered systems literature a little smoother.

Basically I am now going to play out my learning curve through this blog.

The type of problems I eventually hope to study will be of the following form. Take some family of computational problems {\mathcal{F}}, and consider a random instance {P} from this family. What is the running time, on average, of some quantum algorithm to solve the problem? At the same time I’d also like to consider families {\mathcal{Q}} of quantum problems (eg. a family of quantum systems) and understand the running time, on average, of either classical or quantum algorithms to calculate approximations to, eg., expectation values of local observables, of a random instance. In both cases there is obviously some quantum system (i.e., the quantum computer in the first case, and the quantum system studied in the second case), and there is obviously some disorder present. The hope, broadly speaking, is to exploit the ubiquitous phenomenon of Anderson localisation to understand what happens in each case.
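To make this setup concrete, here is a minimal classical sketch of the kind of experiment I have in mind: draw random instances from a family {\mathcal{F}} (random 3-SAT, say) and measure the average work of some solver. Everything here is illustrative: the brute-force solver stands in for whatever (quantum) algorithm one actually cares about, and the clause-to-variable ratio 4.2 is chosen to sit near the satisfiability threshold.

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    """A random 3-SAT instance: each clause is 3 distinct literals (var, sign)."""
    inst = []
    for _ in range(n_clauses):
        vars_ = rng.sample(range(n_vars), 3)
        inst.append([(v, rng.random() < 0.5) for v in vars_])
    return inst

def brute_force_sat(inst, n_vars):
    """Try all 2^n assignments; return (assignments tried, satisfiable?)."""
    for steps, bits in enumerate(itertools.product([False, True], repeat=n_vars), 1):
        if all(any(bits[v] == sign for v, sign in clause) for clause in inst):
            return steps, True
    return 2 ** n_vars, False

def average_steps(n_vars, ratio=4.2, trials=50, seed=0):
    """Average work of the brute-force solver over random instances at clause ratio `ratio`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        inst = random_3sat(n_vars, int(ratio * n_vars), rng)
        steps, _ = brute_force_sat(inst, n_vars)
        total += steps
    return total / trials

print(average_steps(10))
```

The interesting question is how this average scales with n_vars as one moves around in the family, which is exactly where average-case analysis departs from worst-case complexity.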

However, except in some very special cases, the problems I want to study don’t reduce in any obvious way to systems studied in the disordered quantum systems literature (at least, not so far as I can see). So I’m now casting around looking for interesting stuff which might have some relevance…

At the most abstract and high level all of the problems I want to eventually consider require that one understands the critical points of a random function (which is usually related to the running time). With a bit of luck one will be able to write this expression as a polynomial. Hence it’d be nice to understand, say, the roots of random polynomials. Read the rest of this entry »
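Kac’s result says that a degree-n polynomial with iid standard Gaussian coefficients has, on average, roughly (2/\pi)\log n real roots. A quick Monte Carlo sanity check is easy to run; note that the tolerance used to decide which numerically computed roots count as real is a heuristic, and the numerical root-finding gets less reliable at high degree.

```python
import numpy as np

def count_real_roots(coeffs, tol=1e-6):
    """Count (numerically) real roots of the polynomial with these coefficients."""
    roots = np.roots(coeffs)
    return int(np.sum(np.abs(roots.imag) < tol))

def average_real_roots(degree, trials=200, rng=None):
    """Monte Carlo estimate of E[# real roots] for iid N(0,1) coefficients."""
    rng = np.random.default_rng(rng)
    total = 0
    for _ in range(trials):
        coeffs = rng.standard_normal(degree + 1)
        total += count_real_roots(coeffs)
    return total / trials

# compare the empirical average with Kac's asymptotic (2/pi) log(n)
for n in (10, 50, 100):
    print(n, average_real_roots(n, trials=100, rng=0), (2 / np.pi) * np.log(n))
```

Even at modest degrees the logarithmic scaling is visible: the average number of real roots grows very slowly with n.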


Hamiltonian complexity talk

April 1, 2009

On Tuesday I gave a talk at the IMA conference at the IMS. I’ve put the slides here (it is a huge file, alas, as I included lots of pictures…) 

In this talk I took the opportunity to popularise a problem that I hope will provide a fruitful avenue for future research in the emergent field of hamiltonian complexity.

Before I describe this problem, I’d like to say a couple of words about hamiltonian complexity itself: this field, which gained considerable momentum last year thanks to a couple of tightly focussed workshops at Leiden and Santa Fe, is (roughly speaking) aimed at exploring the interplay between theoretical physics and computational complexity theory. The central question of hamiltonian complexity, I would argue, is: “how hard is it to simulate a physical system?” To actually answer this question quantitatively requires that we be rather careful about what we mean by “physical system”, “simulation”, and “hardness”.

For “physical system” it has been convenient for quantum information theorists to focus on quantum spin systems (i.e., a set of n quantum spins arrayed in a low-dimensional lattice). Thus we say the size of such a system is n. (Note, however, that the dimension of the hilbert space of such a system scales at least as fast as 2^n.)  

For “simulation” we typically mean a classical computer carrying out some algorithm, although one could widen this definition. To count as a simulation the algorithm in question needs to be able to calculate a prediction for some physical observable.

For “hardness”, the field of computational complexity provides a now-standard framework to make this precise: one quantifies the temporal and spatial resources (measured in arithmetic operations and bits, respectively) required by a computation to carry out the desired task. 

With these ingredients in hand we can now state the main problem of hamiltonian complexity (well, of theoretical physics, really):

The simulation problem (equilibrium).

Input: A hamiltonian H for a system of size n; an observable A; a tolerance \epsilon; a temperature T.

Output: an approximation \alpha to the expectation value \langle A\rangle=\mathrm{tr}(Ae^{-H/kT})/\mathcal{Z}, where \mathcal{Z}=\mathrm{tr}(e^{-H/kT}) is the partition function, which satisfies |\alpha- \langle A \rangle| \le \epsilon.
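For intuition, here is a brute-force solution of the simulation problem by exact diagonalization. The model (a transverse-field Ising chain), the observables, and the convention k = 1 are my choices for illustration; the point is that the cost of this approach grows exponentially with n, which is exactly the issue discussed below.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-spin chain."""
    ops = [I2] * n
    ops[site] = op
    return reduce(np.kron, ops)

def tfim_hamiltonian(n, J=1.0, h=1.0):
    """Transverse-field Ising chain: H = -J sum Z_i Z_{i+1} - h sum X_i."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_at(Z, i, n) @ op_at(Z, i + 1, n)
    for i in range(n):
        H -= h * op_at(X, i, n)
    return H

def thermal_expectation(H, A, T):
    """<A> = tr(A e^{-H/T}) / Z by full diagonalization (units with k = 1)."""
    w, V = np.linalg.eigh(H)
    w = w - w.min()                      # shift spectrum for numerical stability
    boltz = np.exp(-w / T)
    Zpart = boltz.sum()
    A_eig = V.conj().T @ A @ V           # observable in the energy eigenbasis
    return float(np.real(np.sum(np.diag(A_eig) * boltz)) / Zpart)

n = 8
H = tfim_hamiltonian(n)
A = op_at(X, n // 2, n)                  # a local observable: X on the middle spin
print(thermal_expectation(H, A, T=1.0))
```

Already at n = 8 the matrices are 256-dimensional; pushing this naive approach to n = 50 would require storing and diagonalizing 2^{50}-dimensional matrices, which is hopeless.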

The simulation problem, as stated, is simply the task of making a prediction for some observable. Hamiltonian complexity theory adds another dimension to the simulation problem by asking how hard it is to actually calculate such a prediction. This is clearly an important question: after all, suppose it took 2^n fundamental arithmetic operations to make a prediction for some experimentally accessible observable A. This would be a disaster; even for 50 spins, one would need to perform \approx 2^{50} operations, which is clearly infeasible. So we’d have to give up on making any predictions whatsoever for such systems!

Obviously physicists are very good at making predictions, so this nightmare scenario never really occurs in practice. Hamiltonian complexity is thus aimed at explaining why this is the case. This is far from trivial because it is now relatively straightforward to write down 1D quantum spin systems whose equilibrium properties really do require something like 2^n arithmetic operations to approximate to within fairly reasonable tolerances! (We heard a lot about the state of the art of these results on Tuesday.)

Hamiltonian complexity is broken up into many sub-fields aimed at understanding variations on the theme of simulation and complexity. There are many subtle and profound techniques that have been developed to construct semi-realistic systems whose physical properties are hard to approximate. And there is yet another set of approaches aimed at actually simulating real (and imaginary) physical systems.

My talk was aimed at describing some situations where we actually can make predictions about quantum spin systems. I described two different situations where physical properties can be efficiently predicted on a classical computer. Typically (and the results I described are no exception) a proof that a physical property can be efficiently simulated actually provides a provably efficient algorithm to calculate the prediction.

What I feel is the most interesting open question now is to understand better the interplay between the observable A one wishes to predict and the hamiltonian H to be simulated. I think this adds a new layer of richness to the complexity-theoretic landscape for quantum spin systems: most work to date has focussed on the LOCAL HAMILTONIAN PROBLEM, which is the special case of the SIMULATION PROBLEM, above, where one only demands predictions for the observable H itself. However, it is completely plausible that if you aren’t interested in the energy, but rather in some other observable A, the resulting SIMULATION PROBLEM may become easier or harder!


Translation-invariant quantum states

March 12, 2009

In making my research open I’ve continually encountered the difficulty of working out what to actually post. My typing speed doesn’t really match the speed at which I write down notes. In order to overcome this I’ve already censored most of the worst mistakes in my handwritten notes: i.e., I’ve spared you all of the crossed-out calculations where I made, eg., a minus-sign error, and have just written up the corrected notes. But apart from this you are pretty much getting what I’m thinking. There is an exception: I am involved in several projects where my co-authors have, for good reasons, requested I not post on them.

In writing my notes for a wider audience I also attempt to preface each post with some kind of general discussion. This seems to be a useful device, not unlike del.icio.us, where you are forced to write a description when bookmarking. These little extra tasks act as a kind of mental filing system. Also, I think of the prefaces and tags etc. as a way to enthuse a wider audience to work on the problems I’m working on. (Not entirely clear if this is working yet…)

In this post I’d like to talk about a problem that I’m not really working on, but plan one day to work on if I have an idea. I guess most researchers have these kinds of “to do” lists of problems waiting for time/inspiration. I find that these problems take up a lot of mental space (even when I’m not thinking about them directly: eg. “I must remember to think about problem X”) and I’d like to experiment by posting about one of them here in an attempt to clear out this to-do list, so to speak.

The problem/idea I’d like to talk about today is principally motivated by a single figure (Figure 1) in a paper of Cirac and Verstraete. Read the rest of this entry »


(Matt Hastings) Adiabatic quantum computation with long paths

February 24, 2009

[This post is authored by Matt Hastings, who has kindly guest blogged this article on adiabatic quantum computation. – T.]

In this guest post, I want to talk about adiabatic quantum computing. But first, a couple of comments on how this post fits into “open notebook” science. I don’t keep a detailed notebook. Instead, I have a bunch of random ideas, most of which (>95%) I discard very quickly. I’m going to write about one of these ideas. I think this idea could be promising, so this post is not about a completely discarded idea. On the other hand, I think that there are many people who are much more capable of getting something out of this idea than I am. So, I think this is a way that I can use “open notebook” science to see if anyone else can use this. Read the rest of this entry »


An AC hierarchy based on subspace circuits

February 12, 2009

[This post was created from latex using a script written by Luca Trevisan — many thanks!]

In this post I want to continue the series of posts (see here and here and here) on quantum boolean functions, quantum disjunctive normal form, and (hopefully) lower bounds for classes of quantum circuits.

Here I want to introduce a class of languages (which is computer science speak for “class of problems”) which can be decided by quantum computers capable of measuring quantum boolean functions {f} defined by special circuits (not quantum circuits) which I’ll provisionally call subspace circuits. Using these special circuits I’ll define an {\mathbf{AC}} hierarchy of languages. Since {\mathbf{QAC^j}} is already taken (see this paper for the definition) we’ll refer to our resulting {\mathbf{AC}}-like hierarchy as subspace-{\mathbf{AC}}. Read the rest of this entry »


Visit to the AMOPP group at UCL

February 11, 2009

Today I was invited to give a talk to the AMOPP group at UCL. While I was visiting I was given a lab tour of the LCN, and I got to see all their fancy stainless steel lab equipment including big magnets and other stuff to move and measure small cold things.  During the visit I heard about the latest stuff they are doing there, including some very exciting implementations of electrical spin-trap readout of coherence in silicon.

I heard from the group of Sougato Bose, and I learnt about some counterintuitive results showing that after a quench of interactions in the XXZ chain there can be a substantial generation of entanglement from an initial thermal state. I also heard about some other results about entanglement generation after a quench, but this time where only one or two interactions are quenched. Both of these papers are intimately linked with how quasiparticles propagate through interacting quantum spin systems and, by focussing on entanglement generation, expose subtle transport physics in these models. Eg. check out figure 4 in this paper where some kind of interesting transition (probably not a quantum phase transition) is occurring at \Delta = -0.5.
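As a toy version of such a quench, one can evolve a small XXZ chain exactly and watch the half-chain entanglement entropy grow. This is only a minimal sketch: I start from a Néel product state rather than a thermal state, use open boundary conditions, and pick \Delta = -0.5 merely to echo the value mentioned above; entropies are in nats.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-spin chain."""
    ops = [I2] * n
    ops[site] = op
    return reduce(np.kron, ops)

def xxz_hamiltonian(n, delta):
    """Open XXZ chain: H = sum_i X_i X_{i+1} + Y_i Y_{i+1} + delta Z_i Z_{i+1}."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H += op_at(X, i, n) @ op_at(X, i + 1, n)
        H += op_at(Y, i, n) @ op_at(Y, i + 1, n)
        H += delta * op_at(Z, i, n) @ op_at(Z, i + 1, n)
    return H

def half_chain_entropy(psi, n):
    """Von Neumann entropy (in nats) of the left half of an n-spin pure state."""
    m = n // 2
    s = np.linalg.svd(psi.reshape(2**m, 2**(n - m)), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

n = 8
H = xxz_hamiltonian(n, delta=-0.5)
# quench from the Neel product state |0101...>
idx = int("01" * (n // 2), 2)
psi0 = np.zeros(2**n, dtype=complex)
psi0[idx] = 1.0
w, V = np.linalg.eigh(H)
for t in (0.0, 0.5, 1.0, 2.0):
    psi_t = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))
    print(t, round(half_chain_entropy(psi_t, n), 4))
```

The entropy starts at zero (a product state) and grows as quasiparticle pairs created by the quench spread across the cut, which is the transport physics these papers probe.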

The slides for the talk I gave are here. I spoke about some of the known results concerning the computational complexity of simulating interacting quantum systems, in particular, quantum spin systems.