
An open science weblog focussed on quantum information theory, condensed matter physics, and mathematical physics

“It’s better to pursue one bad idea to its logical conclusion than it is to start and not finish ten good ones,” Michael said.

I was sitting in Michael Nielsen’s office at The University of Queensland: it was early 2002 — a steamy Brisbane summer afternoon — and the air conditioner struggled to cool the room. I had just finished a long, despairing complaint about the disappointing lack of progress I’d been making on my PhD when he offered me this advice. (I was beginning the third and final year of my PhD.)

I’d had an interesting ride so far: I began my PhD in 2000 in applied mathematics, studying free-surface problems in fluid mechanics. Fluid dynamics is a challenging and mature research area, and getting up to speed in it requires a lot of effort. Unfortunately, I am very lazy, and it had taken me a very long time. I also quickly found out that I just wasn’t that interested in the motion of fluids (although one of the papers I’m proudest of emerged from this period). I became unmotivated and began distracting myself by reading quantum field theory textbooks instead of finding that sneaky bug in my code…

Then everything changed. I think it was in late 2001 when Michael arrived at UQ and gave a series of talks on quantum computers. I was hooked and I immediately dropped everything and started working with Michael “in my free time” on quantum entanglement and condensed matter systems.

I once heard a definition of a golden age in science as a period when mediocre scientists could make great contributions. (I forget when and where I heard this, and a cursory Google search didn’t turn up anything.) The early 2000s were definitely a golden age for quantum information theory, and I had the greatest luck to work with one of its architects. In practically no time whatsoever (in comparison with applied mathematics) we’d written a couple of papers on entanglement in quantum phase transitions.

It had been just so effortless. Now I’d finally found a research field that appealed to me: with an absolute minimum of effort one could write a paper that’d be read by more than two people. Wow! (Alas, this is no longer true…)

All this went to my head. I figured that if one could just stick two buzzwords together (entanglement and quantum phase transitions) and get a paper then why not do it again? I was skimming through texts on partial differential equations, algebraic topology, and stochastic calculus and seeing connections EVERYWHERE! I was discovering “deep” connections between entanglement and homotopy theory before breathlessly diving into an idea for a quantum algorithm to solve PDEs. I would spiral into hypnotic trances, staring distractedly into space while one amazing idea after another flowed through my mind. (This is the closest I ever got to the flow state so beloved of hackers.)

But at the same time frustration, edged with desperation, was growing. I was having all these amazing ideas but, somehow, whenever I sat down to write one of them up it began to seem just *sooooo* boring, and I promptly had a better one. My hard drive filled with unfinished papers. I had less than a year until my money was gone, and no new papers!

I was lost in the dark playground:

I then went to Michael and told him of my frustration, and it was this complaint that prompted his advice. All at once, it was clear to me what I’d been doing wrong. So I threw my energies into a problem Michael suggested might be interesting: proving the general Coffman-Kundu-Wootters inequality. This was a hugely satisfying time; although I didn’t end up proving the inequality during my PhD, I managed, mostly by myself, to work out a generalisation of a formula for a mixed-state entanglement measure that I was convinced would be essential for a proof (this sort of thing was a big deal in those days; I guess not anymore). Every day I was tempted by new and more interesting ideas, but I now knew them for the temptation of procrastination that they were.

Michael’s advice has stuck with me ever since and has become one of my most cherished principles. These days I’m often heard giving the same advice to people suffering from the same temptation of the “better idea”.

Now “focus on one idea” is all very well, but which idea should you focus on? (You will no doubt have noticed that I was rather lucky Michael had the perspective to suggest what was actually a rather good one.) What do we do if we have lots and lots of good ideas, each one of them clamoring for attention? How do we break the symmetry? How can we best choose just one or two ideas to focus on? How should you split your most precious resource, your time, while balancing the riskiness of an idea against its potential return?

Ultimately I do not have an answer, but I do have a decision tool that can help you to make your mind up. The idea is to regard research ideas as investments, i.e. *assets*, and to evaluate their potential *return* and their *risk*. In this language we have reduced the problem to that of investing some capital, *your time*, amongst several assets. This is an old problem in portfolio management and there is a very nice tool to help you work out the best strategy: the risk-return plane. The idea is pretty simple. In the case of portfolio management you have some capital you want to split amongst a portfolio of assets which are characterised by two numbers, their average return and their risk, i.e., the standard deviation of their return. Take a two-dimensional plane and label the x-axis with the word “risk” and the y-axis with the word “return”. Each asset is plotted as a point on the risk-return plane:

Now something should be obvious: you should never invest in an asset when another offers the same return at lower risk, nor when another offers a higher return at the same risk. This picks out a thin set of undominated assets living on the upper-left “boundary” (part of the convex hull) of all the assets, called the *efficient frontier*. You should only ever invest in assets on the efficient frontier.
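
To make this concrete, here is a minimal sketch in Python of how one might extract the efficient frontier from a list of (risk, return) pairs. The function name and the toy numbers are my own invention, not anything standard:

```python
def efficient_frontier(assets):
    """Return the assets not dominated by any other asset.

    Each asset is a (risk, return) pair. An asset is dominated if
    some *different* asset offers at least the same return at no
    more risk.
    """
    frontier = []
    for i, (risk_i, ret_i) in enumerate(assets):
        dominated = any(
            risk_j <= risk_i and ret_j >= ret_i and (risk_j, ret_j) != (risk_i, ret_i)
            for j, (risk_j, ret_j) in enumerate(assets)
            if j != i
        )
        if not dominated:
            frontier.append((risk_i, ret_i))
    return frontier
```

Everything off the frontier can be discarded before you even start agonising over trade-offs.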

For a joke I once suggested using the risk-return plane to work out which research idea you should work on. However, it quickly became apparent that some people found it a useful tool. Here’s one way to do things: first write down all your research ideas. Then, after some honest reflection on the most likely outcome if the project succeeds, associate a “return” with each idea. (Just ask yourself: if everything worked out, how happy would you be? How much would the completed idea contribute to science? Insert your own metric here.) The way I did this was, somewhat flippantly, to label each idea with a journal that I thought would accept it. Thus I created a list:

- Journal of publish anything
- Physical Review
- New Journal of Physics
- Physical Review Letters
- Science, Nature, Annals of Mathematics, etc.

It is totally silly but it has just sort of stuck since then. Next, you have to assess the risk of each project. I think a reasonable way to do this is to grade each research idea according to what you think is required to solve it, e.g.:

- Trivial calculation
- Longer calculation
- Some missing steps
- Needs a new technique
- I don’t know anything of what’s required
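
If you like, the two five-point scales above can be encoded as crude ordinal scores; the dictionaries and the function below are just my own toy labels, nothing standard:

```python
# Ordinal encodings of the two rubrics above (my own toy scales).
RETURN_SCALE = {
    "Journal of publish anything": 1,
    "Physical Review": 2,
    "New Journal of Physics": 3,
    "Physical Review Letters": 4,
    "Science, Nature, Annals of Mathematics, etc.": 5,
}

RISK_SCALE = {
    "Trivial calculation": 1,
    "Longer calculation": 2,
    "Some missing steps": 3,
    "Needs a new technique": 4,
    "I don't know anything of what's required": 5,
}

def score(journal, difficulty):
    """Map an idea's two labels to a (risk, return) point on the plane."""
    return (RISK_SCALE[difficulty], RETURN_SCALE[journal])
```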

For an example let’s just take a look at my top twelve research ideas for the last year:

- Chern-Weil theory applied to classification of quantum phases via quasi-adiabatic continuation.
- Reaction kinetics for ultracold chemistry.
- Continuous limits for quantum lattice systems.
- Tensor networks for lattice gauge theory.
- The scattering problem for local excitations in lattice systems.
- Prove the quantum PCP conjecture.
- Improve the gap for adiabatic quantum computation.
- K-theory for the MREGS problem.
- Classify topological order in higher dimensions.
- A compact formula for the distillable entanglement of two qubits.
- Calculate the entanglement of the 2-rotor rotor model.
- Prove the quantum version of the KKL inequality.

Here’s my risk-return plane:

Looking at the results it quickly became apparent that I shouldn’t really invest my energy in a formula for the 2-qubit distillable entanglement (shame! I would be interested in solving that one, but I just can’t see how it would be useful to anyone, including to myself!!!) Also, I should steer clear of the quantum KKL inequality, the quantum PCP conjecture, and K-theory for MREGS.

Note that all of this is *completely* and *utterly* *subjective*! You might well argue that a proof of the quantum PCP conjecture would be a tremendously impactful result, on par with the quantisation of gravity. But my purely subjective assessment at the time was that it would not be of the same level of impact (for me) as, say, classifying topological order in all dimensions.

Thus, out of the top 12 ideas floating around my head only 5 remained. This is still way too many! To winnow down the list it is helpful to use an investment strategy employed in portfolio management, which is, roughly speaking, to invest more of your capital in less risky assets than in riskier ones (i.e., don’t put all your eggs in one risky basket!). Thus I dropped the riskiest idea, and I also dropped the most trivial one as not really giving much personal reward. I was left with three ideas. This was just about enough for me, and I spent most of my available energies on those.
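
That dropping rule can be pushed a little further: one could weight each surviving idea’s share of your time inversely to its risk score. This is entirely my own toy heuristic, not a real portfolio-optimisation formula:

```python
def time_allocation(ideas):
    """Split your time across ideas in inverse proportion to risk.

    `ideas` maps an idea's name to its risk score (1 = trivial
    calculation, ..., 5 = no idea what's required). Returns a map
    from name to the fraction of your time to spend on it.
    """
    weights = {name: 1.0 / risk for name, risk in ideas.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

So an idea with risk score 2 gets twice the time of one with risk score 4: no eggs in one risky basket.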

I find it helpful to keep this risk-return plane around and to periodically update it as I get more information, or when I get new ideas. Looking at it today, I figure I’ll move the adiabatic gap, Chern-Weil theory, and the scattering problem ideas up a bit. Maybe I’ll work on them soon…

There are several stories of great discoveries made in dreams (see this wonderful Wikipedia list for some famous ones).

Unfortunately I have never had the good luck to have a dream which gave me a creative insight to solve a problem. That isn’t to say that I don’t dream about my research. Last night, for instance, I dreamt of classifying two-dimensional quantum phases using a quantum generalisation of the j-invariant and some other invariant which I, in my dream, for some reason wrote as . When I woke up I quickly checked whether any sense could be made of this. As usual, as far as I could see, it was total nonsense. Sigh.

I’ve also been rather envious of those who seem to be able to exploit unconscious cognition. When I was doing my PhD, I was mightily impressed by Michael Nielsen who would sometimes pause in the middle of a conversation and exclaim “I now know how to solve problem x!”. I mean, how cool is that!? Alas, it never worked for me. Oh, I do get “aha” moments rather often, but the result is usually complete junk…

I only get results after hard slog. I have to make lots and lots and lots of mistakes, and only then, slowly and gradually, does the result emerge, reluctantly and complaining all the while, in its final form.

I can’t remember ever really experiencing a dream or an “aha” moment that turned out to be really correct.

Waterloo, Canada. It was early 2004. It was cold.

I was cocky and full of the naive self-satisfied arrogance of a young physicist convinced of his powers. Filled with dreams of glory I made my way onto the airport shuttle to return home to Bristol. I was returning from QIP 2004.

It had been an exciting week.

I had just enjoyed a string of inspiring talks on a variety of fascinating topics across the full spectrum of quantum information theory. There was a palpable buzz in the air: quantum computers were coming! We were all positioning ourselves for the game of the millennium. As far as I was concerned, the only game in town was quantum algorithms, and fame and fortune would be awarded to those lucky few who could find a quantum algorithm yielding the holy grail of the *exponential speedup*.

My mind was a ferment of ideas; I’d just seen a fabulous talk by Oded Regev on quantum complexity, cryptography, and lattice point problems. I was convinced this was where quantum computers would exhibit a speedup. Oded had presented a truly lovely result showing that if you could prepare a very special kind of quantum state, a superposition of gaussians centred on the points of an (unknown) lattice in ℝⁿ, you could decide the shortest/closest vector problem for the lattice in polynomial time (in *n*). Wow! That would be a demonstrable exponential speedup over the best classical algorithm! What’s more, I had an “in”: I knew some quantum optics and I could see that the natural language to discuss this problem would be that of continuous quantum variables. It would be so simple: I’d find a quantum-optical circuit that naturally prepared this state, discretise it, and solve the problem. Along the way I’d introduce a totally new paradigm of continuous quantum algorithms!!

Thus began the worst year of my life.

The year 2003 had already been very difficult for me for a variety of reasons. I’d started my first two-year postdoc in 2003 and it had taken me a long, long time to settle down. I had found moving countries to be so much more difficult than I had imagined. This meant that my productivity had basically fallen to zero for a year (with one exception, everything you see from me on the arXiv during this period is a paper that had been developed during my PhD). So there I was, at the beginning of 2004, resolved to write the *big one*: a paper that would one day become as famous as Shor’s algorithm. I figured they’d call it Osborne’s lattice algorithm. Of course, when giving a talk, I already knew I’d always modestly refer to it as “the quantum lattice point algorithm”. Awesome.

I worked tirelessly. I spent hours and hours in a row, day after day, completely focussed on this one problem. I dreamt about it.

I deployed the full arsenal of my quantum optical tricks to annihilate this problem. And at the beginning it looked really promising: I had the basic idea clear (use a phase kickback trick and continuous-variable teleportation to create the superposition, then correct the phases afterward).

Slowly, but surely, a disturbing pattern emerged. I’d begin really inspired, convinced that I was finally on the right track. Then I’d carefully write up my latest version of the algorithm. Then I’d find the mistake, which always turned out to be that I’d simply *misunderstood* some aspect of the problem. Then I was right back at square one. There was simply *nothing* to be recovered because every damn time the mistake was that I’d misunderstood the problem and tried to solve the wrong thing. This ecstasy/agony cycle took, variously, one week to one month each time.

This lasted over 6 months. I was becoming exhausted. I didn’t work on anything else. My personal life was in a shambles.

But I persisted. Surely this time I would do it. Giving up was for losers.

Instead of giving up I doubled down and enlisted the help of Nick Jones, a PhD student at Bristol at the time. I am filled with admiration for Nick who, with limitless patience, worked with me at the board hour after hour on this **wretched** problem. But it was, alas, to no avail.

Finally, humiliated and defeated, I gave up. I think it was October 2004.

This was the worst feeling: nearly a whole year gone with absolutely *nothing* to show for it. Worse, I was wracked with guilt, feeling I’d totally wasted Nick’s time.

…

Things soon got better. By a miracle my postdoc had been extended for a while, so at least I wasn’t on the job market straight away. Secondly, at the end of 2004 I went to a truly inspiring conference at the Isaac Newton Institute where I met up with Guifre Vidal, who showed me something amazing: the Multiscale Entanglement Renormalisation Ansatz. I realised that what I should do is focus more on my core skill set (quantum entanglement and many body quantum spin systems). I began working on tensor networks, read a fantastic paper of Hastings, and got into the Lieb-Robinson game.

If I had my time again what would I do differently? I don’t regret working on this problem. It was worth a try. My mistake was to *keep* working on it, to the exclusion of everything else, for too long. These days I am mindful of the advice of Wheeler: you should never do a calculation until you already know the answer. I also try to keep a portfolio of problems on the go, some risky ones, and some safe ones. (More on that in a future post.) Finally, upon reflection, I think my motivation for working on this problem was totally wrong. I was primarily interested in solving a famous problem and becoming famous rather than the problem itself. In the past decade I’ve learnt to be very critical of this kind of motivation, as I’ve seldom found it successful.

PS. I find the QIP 2004 conference photo rather striking: so many of the attendees have moved on to great success in quantum information theory and have now become household names. It was still a year before Scott Aaronson would start his famous blog. I never would have guessed at the time.

A common complaint I’ve heard from my colleagues is that their favourite and best work is their least appreciated and least cited work. (I certainly feel this way.) It is not hard to imagine why this is: probably their best work is the one that contains the most unfamiliar and original ideas, the most difficult calculations, and is probably the least clear to anyone except the author because it is very difficult to explain something truly new.

(This all rather puts me in mind of the quote:

Don’t worry about people stealing an idea. If it’s original, you will have to ram it down their throats.

Howard H. Aiken, as quoted in Portraits in Silicon (1987) by Robert Slater

)

So, if you are looking for something truly interesting to work on, why not pick an author you respect and find their oldest, least cited paper? (I.e., don’t choose a new one that simply hasn’t been read yet.) Read it forwards and backwards until you completely understand it.

There will surely be some treasure buried in there.

And hey, you can be pretty sure that absolutely no one else will be working on the same thing.

(My personal pick is the intriguing paper of Bill Wootters on entanglement and parallel transport.)

Quantum information theory has evolved in fascinating ways over the past two decades or so and I’ve been privileged to directly witness its development for ten of these years. In this post, I thought I’d have a go at predicting where it will go, and what the “next big thing” for quantum information theory will be.

Around the year 2000 quantum information theory seemed to be primarily focussed on two broad themes: building a quantum computer and developing quantum algorithms for it, and building a resource theory for quantum information via, e.g., quantum entanglement theory and quantum Shannon theory. To a large extent both of these themes continue strongly today, although I’d suggest that quantum Shannon theory has fared much better than the theory of quantum entanglement, in particular that of entanglement measures, which seemed really important a decade ago but much less so now.

One thing that would have been harder to predict was the influence of quantum information theory on other areas of physics. For example, QI has now had some considerable impact in condensed matter physics, particularly with regard to the development of new classical simulation algorithms for complex quantum systems. From my considerably biased perspective I think that this second-order effect has been rather important. Also, there has been excitement about the role and influence of QI on biological physics.

So now to the question: what next for quantum information? I based the following list on topics that I personally find very interesting, and also on observations I’ve made about external pressures coming from funding agencies and from the job market.

**1. Quantum computers**

I firmly believe a quantum computer will be built, although I refuse to say how long this will take. One thing that I think may happen is that the emphasis on fault tolerance thresholds in choosing a quantum computer architecture will diminish slightly as experimentalists engineer systems capable of supporting quantum coherence on longer timescales. I’m sure that cluster states will be exploited in some way in the successful quantum computer architectures. I also feel sure that as we get access to such systems this will spark our creativity in designing nontrivial things to do with them, e.g., in developing quantum algorithms using *dissipative quantum processes*.

**2. Quantum algorithms**

I feel convinced that quantum algorithm development will continue, albeit slowly. One area which hasn’t received much attention — probably because it isn’t as glamorous as an exponential speedup — but which *really* should, is the development of quantum algorithms which give *polynomial* speedups for problems in P. These kinds of speedups could turn out to be extremely important: if the best classical algorithm for a problem of *major practical importance* uses, say, O(n²) operations, and you found a quantum algorithm using O(n) operations, this would have major *practical* implications. I do hope that such speedups will become an area of more intense research and I feel relatively confident this area could take off. Unfortunately I don’t know enough about classical algorithms to give a firm prediction for which kinds of problems will be amenable to such quantum speedups (sorry!).

As mentioned above, another class of quantum algorithms which has so far been relatively unexplored is that of dissipative quantum algorithms. (There are some exceptions here, see, e.g., this, this, and, somewhat immodestly, this.) Such algorithms are extremely important because they give intermediate experimental implementations something to run!

**3. Complex quantum systems**

Quantum information will continue to play a role in the study of complex quantum systems. This is an easy prediction: QI trained people are generally quite good at thinking about quantum coherence, which plays a major role in the physics of strongly interacting quantum systems. I feel relatively confident in predicting that the physics of 2D and, to some extent, 3D lattice systems, will see major QI-inspired developments.

Another area which I am very enthusiastic about is that of quantum systems with *continuous* degrees of freedom, particularly *quantum fields*. Lattice systems are, after all, an approximation to these systems, and it is clear that existing QI-inspired techniques will have some influence here (indeed, this is just beginning with the extension of MPS and MERA to the continuous setting). Additionally, if a good enough interplay can be developed then this would allow quantum field theorists to contribute to quantum information-type problems. Also, holographic correspondences such as the AdS/CFT correspondence have QI aspects, so we might see QI theorists and string theorists working together more strongly here.

**4. Classical physics**

My final prediction concerns the influence of QI on *classical physics*. The thing is, QI trained people are not only good at thinking about quantum coherence, but also about *correlations* in general (see, e.g., the continuing developments in the study of Bell’s inequalities, cryptography based on no-signalling, etc.). Correlations are always hard to think about, but one thing we’ve learnt in studying QI in the context of condensed matter is that if you have a better way to think about correlations then this can lead to new simulation algorithms. Here I have in mind, for example, the study of fluid dynamics as applied to the climate (see this for a longer discussion), and other problems of classical many body physics such as traffic flow, community detection, and image recognition. The nice thing about these areas is that they are much more directly connected with our everyday life. Any contribution here would have a much more direct impact on important problems facing humanity.

What do you think?