Is quantum mechanics an effective theory?

May 16, 2012

What is the ultimate theory governing Nature? All evidence to date strongly suggests that quantum mechanics (QM) is this theory. However, it could still be that QM is simply a really good effective theory which breaks down if we are able to perform experiments with sufficiently high energy and precision. If this is the case, what sort of “post-quantum theory” could QM be replaced with? Assuming only that special relativity is correct, one can postulate “generalised probabilistic theories” (GPTs) as the framework to explore such alternatives.

What is a GPT? To define one, we need only impose a handful of axioms, introduced in a paper of Barrett and based on previous work by Hardy. They are:

Assumption 1. The state of a single system can be completely specified by listing the probabilities for the outcomes of some subset F of all possible measurements. These are the fiducial measurements. These probabilities can be arranged in a vector P.

Assumption 2. For each type of system, the set of allowed normalized states is closed and convex. The complete set of states S is the convex hull of the set of allowed normalized states and 0.

Assumption 3. For each type of system, there is a set T of allowed transformations mapping S to itself. The set T is convex and includes the transformation that maps every state P to 0.

Assumption 4 (Local Operations Commute). Consider a joint system composed of systems A and B. Suppose that an operation is performed on system A alone with outcome OA and an operation on system B alone with outcome OB. The final unnormalized state of the joint system does not depend on the order in which the operations were performed. In particular, this implies that the joint probability of getting outcomes OA and OB does not depend on the ordering of the operations.

These aren’t the only axioms one could use, and there has been plenty of work tinkering with them. However, for convenience, we take these as our working definition of a GPT. One consequence of these assumptions is that the set of possible states, S, and the set of measurement effects, F, are dual convex bodies. Thus, given the assumptions, a GPT is completely specified by either S or F. We choose to work with F.
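To make Assumption 1 concrete, here is a toy sketch in Python (all function names are mine, purely for illustration) of how an ordinary qubit fits the fiducial-measurement picture: the probabilities of the +1 outcomes of spin measurements along x, y and z fix the state, and the allowed normalized states form a convex body, namely the Bloch ball.

```python
import numpy as np

def fiducial_vector(r):
    """Probabilities of outcome +1 for the three fiducial measurements
    (spin along x, y, z) of a qubit with Bloch vector r: p_i = (1 + r_i)/2."""
    r = np.asarray(r, dtype=float)
    return (1.0 + r) / 2.0

def is_valid_state(P, tol=1e-9):
    """A vector P of fiducial probabilities describes a valid qubit state
    iff the corresponding Bloch vector lies in the unit ball."""
    r = 2.0 * np.asarray(P) - 1.0
    return np.linalg.norm(r) <= 1.0 + tol

# A pure state (spin up along z) and the maximally mixed state.
P_up = fiducial_vector([0.0, 0.0, 1.0])
P_mixed = fiducial_vector([0.0, 0.0, 0.0])

# Convexity (Assumption 2): any mixture of valid states is again valid.
P_mix = 0.3 * P_up + 0.7 * P_mixed
assert is_valid_state(P_up) and is_valid_state(P_mixed) and is_valid_state(P_mix)
```

A different GPT would simply carve out a different convex body of allowed vectors P; the fiducial bookkeeping stays the same.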

GPTs have received considerable attention recently, both as a foil to better understand the features of QM, and as a powerful abstract way to reason about correlations and relativity. These investigations have led to many interesting results, including simplified and improved cryptographic schemes and primitives. An interesting consequence is that many GPTs beyond QM yield violations of the CHSH inequality exceeding those possible in QM; since no such violations have been observed, these theories are essentially ruled out by experiment.
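To get a concrete feel for the gap, here is a small sketch (names and numbers are mine, for illustration only) comparing the CHSH combination for three correlation tables: a deterministic local strategy (value 2, the local bound), the optimal quantum strategy on a singlet (value 2√2, Tsirelson's bound), and a PR box (value 4, the maximal non-signalling value, which is allowed in some GPTs but not in QM).

```python
import numpy as np

def chsh(E):
    """CHSH combination S = E(0,0) + E(0,1) + E(1,0) - E(1,1)
    for a 2x2 table of correlators E[x][y] in [-1, 1]."""
    return E[0][0] + E[0][1] + E[1][0] - E[1][1]

# Deterministic local strategy: every correlator is +1, so S = 2.
E_classical = [[1, 1], [1, 1]]

# Optimal quantum strategy on a singlet: each correlator has
# magnitude 1/sqrt(2), giving S = 2*sqrt(2) (Tsirelson's bound).
c = 1 / np.sqrt(2)
E_quantum = [[c, c], [c, -c]]

# PR box: a perfectly valid non-signalling GPT correlation with S = 4.
E_pr = [[1, 1], [1, -1]]

print(chsh(E_classical), chsh(E_quantum), chsh(E_pr))
```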

But, let’s suppose for the moment that Nature isn’t described by QM, and rather by some other GPT. A natural question then arises: why is QM such a good effective theory? A natural answer, which we investigate in a recent preprint, is that experimental imperfections prevent us from observing any post-quantum phenomena.

Suppose that Nature is described by a GPT with a high-dimensional state space and corresponding high-dimensional set of all possible measurements. Observational limitations, such as detector resolution, mean that it is impossible to access most of these theoretically possible measurements. If physically implementable measurements are those chosen from some typical subset (a precise definition is given in our paper) then we show that the bipartite correlations arising in any experiment can be modelled, to a high degree of precision, by those of QM. Note that the tripartite and multipartite correlations could go beyond those exhibited by QM: a sufficiently refined experiment involving three or more particles could exhibit behaviour going beyond that possible within QM!

It is interesting to contrast our setting with that of decoherence, which models the passage from the microscopic to the macroscopic classical world. The crucial difference here is that decoherence arises from the correlations developed between a given particle and many other inaccessible particles (in the GPT framework it is rather likely that decoherence will always lead to an effective classical theory). By way of contrast, we considered only a few particles in isolation: roughly speaking, we studied the case where only the “local dimensions” are effectively truncated.

Our argument makes use of the concentration of measure phenomenon, epitomized by Dvoretzky’s theorem, which states, roughly, that a random low-dimensional section of a high-dimensional convex body looks approximately spherical (check out the figure in our paper for an illustration). This result then allows us to exploit the observation that spherical state spaces can be simulated by sections of quantum mechanical state spaces. Our approach also owes much to the recent work showing that bipartite correlations may be modelled by QM when the constituents locally obey QM.
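Dvoretzky's theorem can even be seen numerically. The sketch below (a rough illustration, not from our paper; all names are mine) samples a random two-dimensional section of the n-dimensional l1 ball (the cross-polytope) and measures how far the section's boundary deviates from a circle; the max/min radius ratio approaches 1 as the ambient dimension grows.

```python
import numpy as np

def section_roundness(n, n_angles=200, rng=None):
    """Take a random 2D central section of the n-dimensional l1 ball and
    return the ratio of its largest to smallest boundary radius.
    A ratio near 1 means the section is nearly circular."""
    rng = np.random.default_rng(rng)
    # Random orthonormal basis of a 2-plane via QR decomposition.
    q, _ = np.linalg.qr(rng.standard_normal((n, 2)))
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    dirs = np.outer(np.cos(thetas), q[:, 0]) + np.outer(np.sin(thetas), q[:, 1])
    # Along a unit direction u, the boundary of the l1 ball sits at
    # distance 1 / ||u||_1 from the origin.
    radii = 1.0 / np.abs(dirs).sum(axis=1)
    return radii.max() / radii.min()

# Sections become rounder as the ambient dimension grows.
for n in (3, 30, 3000):
    print(n, section_roundness(n, rng=0))
```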

Putting all this together, we obtain our main result:

If the local measurements in a GPT are chosen from a typical section of the convex body of all possible measurements then, with a high degree of accuracy, they do not yield any post-quantum prediction for the bipartite scenario.


A quantum generalisation of Talagrand’s inequality

February 13, 2009

Talagrand’s inequality places nontrivial bounds on the growth of the set of strings which lie within Hamming distance k of some subset of strings. In these notes we formulate a quantum generalisation of Talagrand’s inequality, which places bounds on the growth of the dimension of the subspace formed by taking the join of all subspaces which differ from a given subspace A in at most k places.
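As a warm-up for the classical statement, here is a small sketch (purely illustrative; the names are mine) that enlarges a subset A of bit strings to everything within Hamming distance k, showing how quickly the enlargement swallows the whole cube even when A starts out as a single string:

```python
from itertools import product

def hamming(s, t):
    """Hamming distance between two equal-length bit strings."""
    return sum(a != b for a, b in zip(s, t))

def enlarge(A, n, k):
    """All n-bit strings within Hamming distance k of some string in A."""
    return {s for s in product((0, 1), repeat=n)
            if any(hamming(s, a) <= k for a in A)}

n = 10
A = {tuple([0] * n)}  # start from a single string
for k in range(4):
    frac = len(enlarge(A, n, k)) / 2 ** n
    print(k, frac)
```

The quantum generalisation replaces the subset A by a subspace and the Hamming enlargement by the join of nearby subspaces, with the set's cardinality replaced by the subspace's dimension.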

The results in this post are joint work with Andreas Winter.