I’ve recently been pondering this picture
(Yes, I used Comic Sans. For a reason.)
“It’s better to pursue one bad idea to its logical conclusion than it is to start and not finish ten good ones,” Michael said.
I was sitting in Michael Nielsen’s office at The University of Queensland: it was early 2002 — a steamy Brisbane summer afternoon — and the air conditioner struggled to cool the room. I had just finished a long, despairing complaint about the disappointing lack of progress I’d been making on my PhD when he offered me this advice. (I was beginning the third and final year of my PhD.)
I’d had an interesting ride so far: I began my PhD in 2000 in applied mathematics, studying free-surface problems in fluid mechanics. Fluid dynamics is a challenging and mature research area, and it takes a lot of effort to get up to speed. Unfortunately, I am very lazy, and it had taken me a very long time. I also soon found out that I just wasn’t that interested in the motion of fluids (although one of the papers I’m proudest of emerged from this period). I became unmotivated and began distracting myself by reading quantum field theory textbooks instead of finding that sneaky bug in my code…
Then everything changed. I think it was in late 2001 when Michael arrived at UQ and gave a series of talks on quantum computers. I was hooked and I immediately dropped everything and started working with Michael “in my free time” on quantum entanglement and condensed matter systems.
I once heard a definition of a golden age in science as a period when mediocre scientists could make great contributions. (I forget when and where I heard this and a cursory google search didn’t turn up anything.) The early 2000s were definitely a golden age for quantum information theory and I had the greatest luck to work with one of its architects. In practically no time whatsoever (in comparison with applied mathematics) we’d written a couple of papers on entanglement in quantum phase transitions.
It had been just so effortless. Now I’d finally found a research field that appealed to me: with an absolute minimum of effort one could write a paper that’d be read by more than two people. Wow! (Alas, this is no longer true…)
All this went to my head. I figured that if one could just stick two buzzwords together (entanglement and quantum phase transitions) and get a paper then why not do it again? I was skimming through texts on partial differential equations, algebraic topology, and stochastic calculus and seeing connections EVERYWHERE! I was discovering “deep” connections between entanglement and homotopy theory before breathlessly diving into an idea for a quantum algorithm to solve PDEs. I would spiral into hypnotic trances, staring distractedly into space while one amazing idea after another flowed through my mind. (This is the closest I ever got to the flow state so beloved of hackers.)
But at the same time frustration, edged with desperation, was growing. I was having all these amazing ideas but, somehow, when I started writing one of them down it started to seem just sooooo boring and I promptly had a better one. My hard drive filled with unfinished papers. I had less than a year until my money was gone and no new papers!
I was lost in the dark playground:
I then went to Michael and told him of my frustration. And it was this complaint that had prompted him to give me his advice. All at once, it was clear to me what I’d been doing wrong. So I threw my energies into a problem Michael suggested might be interesting: proving the general Coffman-Kundu-Wootters inequality. This was a hugely satisfying time; although I didn’t end up proving the inequality during my PhD, I managed to work out, mostly by myself, a generalisation of a formula for a mixed-state entanglement measure that I was convinced would be essential for a proof (this sort of thing was a big deal in those days; I guess it isn’t anymore). Every day I was tempted by new and more interesting ideas, but I now knew them for the temptation of procrastination that they were.
Michael’s advice has stuck with me ever since and has become one of my most cherished principles. These days I’m often heard giving the same advice to people suffering from the same temptation of the “better idea”.
Now “focus on one idea” is all very well, but which idea should you focus on? (You will no doubt have noticed that I was rather lucky Michael had the perspective to suggest what was actually a rather good one.) What do we do if we have lots and lots of good ideas, each one of them clamoring for attention? How do we break the symmetry? How can we best choose just one or two ideas to focus on? How should you split your most precious resource, your time, while balancing the riskiness of an idea against its potential return?
Ultimately I do not have an answer, but I do have a decision tool that can help you make up your mind. The idea is to regard research ideas as investments, i.e., assets, and to evaluate their potential return and their risk. In this language we have reduced the problem to that of investing some capital, your time, amongst several assets. This is an old problem in portfolio management, and there is a very nice tool to help you work out the best strategy: the risk-return plane. The idea is pretty simple. In portfolio management you have some capital you want to split amongst a portfolio of assets, each characterised by two numbers: their average return and their risk, i.e., the standard deviation of their return. Take a two-dimensional plane and label the x-axis “risk” and the y-axis “return”. Each asset is plotted as a point on the risk-return plane:
Now something should be obvious: you should never invest in an asset when another offers the same return at lower risk, nor when another offers a higher return at the same risk. This picks out a thin set of assets living on the “boundary” of (basically the upper-left edge of the convex hull of) all the assets, called the efficient frontier. You should only ever invest in assets on the efficient frontier.
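The dominance rule above is easy to mechanise. Here’s a minimal Python sketch of that filter; the assets and their scores are entirely made up for illustration:

```python
def efficient_frontier(assets):
    """Return the (risk, return) pairs not dominated by any other asset.

    An asset is dominated if some other asset offers at least the same
    return at strictly lower risk, or strictly higher return at no
    greater risk.
    """
    frontier = []
    for i, (risk_i, ret_i) in enumerate(assets):
        dominated = any(
            (risk_j < risk_i and ret_j >= ret_i)
            or (risk_j <= risk_i and ret_j > ret_i)
            for j, (risk_j, ret_j) in enumerate(assets)
            if j != i
        )
        if not dominated:
            frontier.append((risk_i, ret_i))
    return frontier


# four hypothetical assets as (risk, return) pairs
assets = [(1.0, 2.0), (1.0, 5.0), (3.0, 5.0), (2.0, 8.0)]
print(efficient_frontier(assets))  # → [(1.0, 5.0), (2.0, 8.0)]
```

The first asset is knocked out by the second (same risk, more return) and the third by the second as well (same return, less risk), leaving only the frontier.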
For a joke I once suggested using the risk-return plane to work out what research idea you should work on. However, it quickly became apparent that some people found it a useful tool. Here’s one way to do things: first write down all your research ideas. Then, after some honest reflection on what you think the most likely outcome of a successful result from the project would be, associate a “return” to each idea. (Just ask yourself: if everything worked out how happy would you be? How much would the completed idea contribute to science? Insert your own metric here.) The way I did this was, somewhat flippantly, to label each idea with a journal that I thought would accept the idea. Thus I created a list:
It is totally silly but it has just sort of stuck since then. Next, you have to assess the risk of each project. I think a reasonable way to do this is to quantify each research idea according to what you think is required to solve it, e.g., according to
For an example let’s just take a look at my top twelve research ideas for the last year:
Here’s my risk-return plane:
Looking at the results it quickly became apparent that I shouldn’t really invest my energy in a formula for the 2-qubit distillable entanglement (shame! I would be interested in solving that one, but I just can’t see how it would be useful to anyone, including myself!) Also, I should steer clear of the quantum KKL inequality, the quantum PCP conjecture, and K-theory for MREGS.
Note that all of this is completely and utterly subjective! You might well argue that a proof of the quantum PCP conjecture would be a tremendously impactful result on par with the quantisation of gravity. But my purely subjective assessment at the time was that it would not be of the same level of impact (for me) as, say, classifying topological order in all dimensions.
Thus, out of the top twelve ideas floating around my head only five remained. This is still way too many! To winnow down the list it is helpful to use an investment strategy employed in portfolio management, which is, roughly speaking, to invest more of your capital in less risky assets than in riskier ones (i.e., don’t put all your eggs in one risky basket!) Thus I dropped the riskiest idea, and I also dropped the most trivial one as not really offering much personal reward. I was left with three ideas. This was just about enough for me, and I spent most of my available energies on those.
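For what it’s worth, the “more weight on safer bets” rule can be caricatured numerically. Here’s a toy sketch, with entirely made-up risk scores, that allocates time in proportion to inverse risk:

```python
def inverse_risk_weights(risks):
    """Split a unit of time across ideas in proportion to 1/risk,
    so that safer ideas receive the larger share."""
    inverse = [1.0 / r for r in risks]
    total = sum(inverse)
    return [x / total for x in inverse]


# hypothetical risk scores for three surviving ideas,
# from safest to riskiest
weights = inverse_risk_weights([1.0, 2.0, 4.0])
print(weights)  # safest idea gets the largest share of your time
```

This is only one of many possible weighting schemes, and nothing in the post prescribes it; the point is simply that the allocation should decrease with risk.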
I find it helpful to keep this risk-return plane around and to periodically update it as I get more information, or when I get new ideas. Looking at it today I figure I’ll move the adiabatic gap, Chern-Weil theory, and the scattering problem ideas up a bit. Maybe I’ll work on them soon…
In this post I’d like to begin to explore what is meant by the continuum limit of a quantum lattice system. This post is meant to serve as the first in a series of intuitive overviews of the ideas involved in the open science project “continuous-limits-of-quantum-lattice-systems” hosted on github.
The continuous limit is a power tool in the condensed-matter theorist’s toolkit: by identifying the appropriate effective field theory modelling the low-energy large-scale physics of a complex quantum system one can bring the fully developed apparatus of (perturbative) field theory and the renormalisation group to bear on a problem, often delivering results unavailable via any other means.
Now I’m pretty sure I’m not alone in feeling confused by much of the available physics literature on this topic. Over the past decade I’ve tried to understand the process whereby a field theory is produced to describe a given quantum lattice system. However, up until recently, this always seemed like a kind of mysterious black magic to me. I know it has to do with symmetries, etc. But this didn’t really help me! I had so many questions. E.g., how exactly does the state of the effective field theory relate to that of the original lattice system? And, for that matter, how do you know which quantities are “fieldlike” and which don’t admit a field-theoretic representation? That is, what has most puzzled me is the quantitative side of things: ideally what I would like is some kind of map which associates, one to one, lattice quantities with field quantities in an operationally transparent way.
Thus I was very excited when I discovered that there is indeed such a map and that, further, it is naturally associated with the quantum de Finetti theorem. Here I’d like to explain the idea behind this construction using the quantum information theoretic language of exchangeable states.
There are several stories of great discoveries being made in dreams (see this wonderful Wikipedia list for some famous ones).
Unfortunately I have never had the good luck to have a dream which gave me a creative insight that solved a problem. That isn’t to say that I don’t dream about my research. Last night, for instance, I dreamt of classifying two-dimensional quantum phases using a quantum generalisation of the j-invariant and some other invariant which I, in my dream, for some reason wrote as . When I woke up I quickly checked whether any sense could be made of this. As usual, as far as I could see, it was total nonsense. Sigh.
I’ve also been rather envious of those who seem to be able to exploit unconscious cognition. When I was doing my PhD, I was mightily impressed by Michael Nielsen who would sometimes pause in the middle of a conversation and exclaim “I now know how to solve problem x!”. I mean, how cool is that!? Alas, it never worked for me. Oh, I do get “aha” moments rather often, but the result is usually complete junk…
I only get results after hard slog. I have to make lots and lots and lots of mistakes and only then, slowly and gradually, the result emerges, reluctantly and complaining all the while, in its final form.
I can’t remember ever really experiencing a dream or an “aha” moment that turned out to be really correct.
Waterloo, Canada. It was early 2004. It was cold.
I was cocky and full of the naive self-satisfied arrogance of a young physicist convinced of his powers. Filled with dreams of glory I made my way onto the airport shuttle to return home to Bristol. I was returning from QIP 2004.
It had been an exciting week.
I had just enjoyed a string of inspiring talks on a variety of fascinating topics across the full spectrum of quantum information theory. There was a palpable buzz in the air: quantum computers were coming! We were all positioning ourselves for the game of the millennium. As far as I was concerned, the only game in town was quantum algorithms, and fame and fortune would be awarded to those lucky few who could find a quantum algorithm yielding the holy grail of the exponential speedup.
My mind was a ferment of ideas; I’d just seen a fabulous talk by Oded Regev on quantum complexity, cryptography, and lattice point problems. I was convinced this was where quantum computers would exhibit a speedup. Oded had presented a truly lovely result showing that if you could prepare a very special kind of quantum state, a superposition of Gaussians centred on the points of an (unknown) lattice in n-dimensional space, you could solve the shortest/closest lattice vector problem in polynomial time (in n). Wow! That would be a demonstrable exponential speedup over the best classical algorithm! What’s more, I had an “in”: I knew some quantum optics and I could see that the natural language to discuss this problem would be that of continuous quantum variables. It would be so simple: I’d find a quantum-optical circuit that naturally prepared this state, discretise it, and solve the problem. Along the way I’d introduce a totally new paradigm of continuous quantum algorithms!
Thus began the worst year of my life.
The year 2003 had already been very difficult for me for a variety of reasons. I’d started my first two-year postdoc in 2003 and it had taken me a long, long time to settle down. I had found moving countries to be so much more difficult than I imagined. This meant that my productivity had basically fallen to zero for a year (everything you see from me on the arXiv during this period, with one exception, is a paper that had been developed during my PhD). So there I was, at the beginning of 2004, resolved to write the big one: a paper that would one day become as famous as Shor’s algorithm. I figured they’d call it Osborne’s lattice algorithm. Of course, when giving a talk, I already knew I’d always modestly refer to it as “the quantum lattice point algorithm”. Awesome.
I worked tirelessly. I spent hours and hours in a row, day after day, completely focussed on this one problem. I dreamt about it.
I deployed the full arsenal of my quantum-optical tricks to annihilate this problem. And at the beginning it looked really promising. I had the basic idea clear (use a phase kickback trick and continuous-variable teleportation to create the superposition, then correct the phases afterward).
Slowly, but surely, a disturbing pattern emerged. I’d begin really inspired, convinced that I was finally on the right track. Then I’d carefully write up my latest version of the algorithm. Then I’d find the mistake, which always turned out to be that I’d simply misunderstood some aspect of the problem. Then I was right back at square one. There was simply nothing to be recovered, because every damn time the mistake was that I’d misunderstood the problem and tried to solve the wrong thing. This ecstasy/agony cycle took anywhere from a week to a month each time.
This lasted over six months. I was becoming exhausted. I didn’t work on anything else. My personal life was in a shambles.
But I persisted. Surely this time I would do it. Giving up was for losers.
Instead of giving up I doubled down and enlisted the help of Nick Jones, a PhD student at Bristol at the time. I am filled with admiration for Nick who, with limitless patience, worked with me at the board hour after hour on this wretched problem. But it was, alas, to no avail.
Finally, humiliated and defeated, I gave up. I think it was October 2004.
This was the worst feeling: nearly a whole year gone with absolutely nothing to show for it. Worse, I was wracked with guilt, feeling I’d totally wasted Nick’s time.
Things soon got better. First, by a miracle, my postdoc had been extended for a while, so at least I wasn’t on the job market straight away. Second, at the end of 2004 I went to a truly inspiring conference at the Isaac Newton Institute where I met up with Guifre Vidal, who showed me something amazing: the multiscale entanglement renormalisation ansatz. I realised that what I should do is focus more on my core skill set (quantum entanglement and many-body quantum spin systems). I began working on tensor networks, read a fantastic paper of Hastings, and got into the Lieb-Robinson game.
If I had my time again what would I do differently? I don’t regret working on this problem. It was worth a try. My mistake was to keep working on it, to the exclusion of everything else, for too long. These days I am mindful of the advice of Wheeler: you should never do a calculation until you already know the answer. I also try to keep a portfolio of problems on the go, some risky ones, and some safe ones. (More on that in a future post.) Finally, upon reflection, I think my motivation for working on this problem was totally wrong. I was primarily interested in solving a famous problem and becoming famous rather than the problem itself. In the past decade I’ve learnt to be very critical of this kind of motivation, as I’ve seldom found it successful.
PS. I find the QIP 2004 conference photo rather striking: so many of the attendees have moved on to great success in quantum information theory and have now become household names. It was still a year before Scott Aaronson would start his famous blog. I never would have guessed at the time.
In my two previous posts I discussed a quantum-information inspired approach to the study of quantum field theory.
So far everything I’ve talked about applies only to standard bosonic quantum fields. However, apart from fermionic quantum fields, there is another type of quantum field that requires a more careful approach, namely, gauge fields.
As I mentioned in my previous post, I have been working for some five years on trying to understand quantum field theory from a quantum-information perspective. This has finally come to fruition of sorts: I’m pretty sure I have an operationally motivated way to build nontrivial non-Gaussian quantum field states using a variety of tensor network states.
The input to the procedure is any family of tensor network states (or, indeed, any family of states) whose correlation length diverges in a controllable way as a function of a scale parameter. The procedure then produces a continuum limit with the corresponding quantum field data modelling the quantum fluctuations around the limit.
There are two main ideas behind the procedure. (1) It begins by extending the mean-field formalism of Hepp and Lieb (developed further by Verbeure and coworkers) to identify the emergent continuous large-scale degrees of freedom describing the classical bulk fluctuations (remarkably, these continuous degrees of freedom are not prescribed beforehand); this is a kind of generalised law-of-large-numbers result. (2) Then, by exploiting a generalised quantum central-limit-theorem type argument, the quantum fluctuations around the bulk are identified, and with them the emergent quantum field operators. The applicability of this procedure is contingent on the family of input states satisfying certain criteria, which essentially boil down to the ability to tune the correlation length in a controlled way.
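Schematically, and purely as a caricature of the Hepp-Lieb / quantum-central-limit setup (my own notation, not the precise statements of the project), the two steps read:

```latex
% (1) Law of large numbers: mean-field averages of a local observable A_i
%     converge to a commuting (classical) bulk variable,
m_N(A) = \frac{1}{N}\sum_{i=1}^{N} A_i \;\longrightarrow\; \omega(A)\,\mathbb{1},
\qquad N \to \infty,
% (2) Central limit: the rescaled fluctuations converge to bosonic field
%     operators obeying canonical commutation relations,
F_N(A) = \frac{1}{\sqrt{N}}\sum_{i=1}^{N}\bigl(A_i - \omega(A)\bigr)
\;\longrightarrow\; F(A),
\qquad [F(A),F(B)] = \omega([A,B])\,\mathbb{1}.
```

Here \(\omega\) denotes the expectation in the reference state, and the arrows stand for convergence in a suitable (state-dependent) sense; the technical conditions are exactly the criteria on the input family mentioned above.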
A nontrivial result is that several tensor network states naturally satisfy the criteria required by the continuum limit procedure: in particular, for the matrix product state and projected entangled-pair state classes we recover their recently introduced continuous counterparts, and for tree tensor network classes arising from Kadanoff block-spin renormalisation and for the multiscale entanglement renormalisation ansatz class we obtain continuum descriptions generalising the recently introduced continuous MERA.
For me the most exciting discovery in all of this is that there is simply an enormous number of non-Gaussian states which can serve as fixed points of Wilson’s RG and give rise to very reasonable renormalisable QFTs.
I gave up on open science a while ago (see this post for details). However, I’ve always wanted to give it another try.
The open-source software (OSS) movement is often held up as a model for how open science should work, and it occurred to me recently that we could exploit a powerful tool used in OSS to facilitate scientific collaborations, namely, github. Thus today I’d like to announce a new github-based open-science project based on the aforementioned continuum limit construction: I’ve created a github repository for this project and uploaded the LaTeX source of a paper I’ve been working on for some time describing this construction. It is my hope that this initial incomplete draft could serve as the basis for a collaborative project on understanding how to implement Wilsonian renormalisation for tensor network states.