What is the ultimate theory governing Nature? All evidence to date strongly suggests that quantum mechanics (QM) is this theory. However, it could still be that QM is simply a really good effective theory which breaks down if we are able to perform experiments with sufficiently high energy and precision. If this is the case, what sort of “post-quantum theory” could QM be replaced with? Assuming only that special relativity is correct, one can postulate “generalised probabilistic theories” (GPTs) as the framework to explore such alternatives.

What is a GPT? To define one, we need only impose a handful of axioms, introduced in a paper of Barrett and based on earlier work by Hardy. They are:

**Assumption 1**. The state of a single system can be completely specified by listing the probabilities for the outcomes of some subset *F* of all possible measurements. These are the fiducial measurements. These probabilities can be arranged in a vector *P*.

**Assumption 2**. For each type of system, the set of allowed normalized states is closed and convex. The complete set of states *S* is the convex hull of the set of allowed normalized states and 0.

**Assumption 3**. For each type of system, there is a set *T* of allowed transformations mapping *S* to itself. The set *T* is convex and includes the transformation that maps every state *P* to 0.

**Assumption 4** (Local Operations Commute). Consider a joint system composed of systems A and B. Suppose that an operation is performed on system A alone with outcome *O*A and an operation on system B alone with outcome *O*B. The final unnormalized state of the joint system does not depend on the order in which the operations were performed. In particular, this implies that the joint probability of getting outcomes *O*A and *O*B does not depend on the ordering of the operations.
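To make Assumption 1 concrete in the familiar quantum case (this sketch and its helper names are my own illustration, not from the paper): for a qubit, the probabilities of the "+1" outcome for spin measured along x, y and z form a fiducial set, and the density matrix can be reconstructed from that probability vector.

```python
import numpy as np

# Pauli matrices: spin along x, y, z serve as fiducial measurements for a qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def fiducial_vector(rho):
    """Probability of the +1 outcome for each fiducial spin measurement.
    By Assumption 1, this vector completely specifies the qubit state."""
    return np.array([np.real(np.trace(rho @ (I2 + S) / 2)) for S in (X, Y, Z)])

def state_from_fiducial(p):
    """Invert: reconstruct the density matrix from the fiducial probabilities."""
    x, y, z = 2 * p - 1  # Bloch vector components
    return (I2 + x * X + y * Y + z * Z) / 2

rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)  # some qubit state
P = fiducial_vector(rho)
assert np.allclose(state_from_fiducial(P), rho)  # round trip recovers the state
```

The round trip works because a qubit state has exactly three real parameters (the Bloch vector), matching the three fiducial probabilities.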

These aren’t the only axioms one could use, and there has been plenty of work tinkering with them. However, for convenience, we take these as our working definition of a GPT. One consequence of these assumptions is that the set of possible states, *S*, and the set of measurement effects, *F*, are dual convex bodies. Thus, given the assumptions, a GPT is completely specified by either *S* or *F*. We choose to work with *F*.
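A toy illustration of this state/effect duality (my own sketch, not from the paper): take the "gbit", whose normalized state space is a square. An effect is a linear functional whose value lies in [0, 1] on every state, and by convexity it suffices to check this on the square's vertices.

```python
import itertools
import numpy as np

# Normalized gbit states written as (p1, p2, 1): two fiducial probabilities
# plus a normalization coordinate.  The state space is the convex hull of:
vertices = [np.array([a, b, 1.0]) for a, b in itertools.product([0, 1], repeat=2)]

def is_effect(e, tol=1e-12):
    """e is a valid effect iff e . P lies in [0, 1] for every state P.
    By convexity it suffices to check the extreme points (the vertices)."""
    return all(-tol <= e @ v <= 1 + tol for v in vertices)

unit = np.array([0.0, 0.0, 1.0])  # the unit effect: e . P = 1 on every state
e1   = np.array([1.0, 0.0, 0.0])  # "first fiducial outcome": e . P = p1
bad  = np.array([1.0, 1.0, 0.0])  # p1 + p2 can reach 2, so this is not an effect

assert is_effect(unit) and is_effect(e1)
assert not is_effect(bad)
```

The valid effects themselves form a convex body; that body is exactly the dual description of the square state space.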

GPTs have received considerable attention recently, both as a foil to better understand the features of QM, and as a powerful abstract way to reason about correlations and relativity. These investigations have led to many interesting results, including simplified and improved cryptographic schemes and primitives. An interesting consequence of many GPTs beyond QM is that they yield violations of CHSH inequalities beyond those possible in QM; since no such violations have ever been observed, these theories are basically ruled out by experiment.
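To make the CHSH remark concrete (this is a standard textbook calculation, not specific to our paper): the Popescu–Rohrlich (PR) box is a valid no-signalling GPT correlation that reaches the algebraic maximum CHSH value of 4, whereas QM is capped at Tsirelson's bound 2√2 ≈ 2.83 and classical theories at 2.

```python
import itertools
import math

def pr_box(a, b, x, y):
    """PR box: outcomes a, b in {0, 1} satisfy a XOR b = x AND y,
    each consistent pair occurring with probability 1/2."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def correlator(p, x, y):
    """E(x, y) = sum over a, b of (-1)^(a+b) * p(a, b | x, y)."""
    return sum((-1) ** (a + b) * p(a, b, x, y)
               for a, b in itertools.product([0, 1], repeat=2))

E = {(x, y): correlator(pr_box, x, y)
     for x, y in itertools.product([0, 1], repeat=2)}
chsh = E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]

print(chsh)              # 4.0: the algebraic maximum, beyond any quantum state
print(2 * math.sqrt(2))  # ~2.828: Tsirelson's bound, the quantum maximum
```

No CHSH experiment has ever produced a value above Tsirelson's bound, which is the sense in which such post-quantum correlations look ruled out.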

But, let’s suppose for the moment that Nature isn’t described by QM, and rather by some other GPT. A natural question then arises: why is QM such a good effective theory? A natural answer, which we investigate in a recent preprint, is that experimental imperfections prevent us from observing any post-quantum phenomena.

Suppose that Nature is described by a GPT with a high-dimensional state space and corresponding high-dimensional set of all possible measurements. Observational limitations, such as detector resolution, mean that it is impossible to access most of these theoretically possible measurements. If physically implementable measurements are those chosen from some typical subset (a precise definition is given in our paper) then we show that the bipartite correlations arising in any experiment can be modelled, to a high degree of precision, by those of QM. Note that the tripartite and multipartite correlations could go beyond those exhibited by QM: a sufficiently refined experiment involving three or more particles could exhibit behaviour going beyond that possible within QM!

It is interesting to contrast our setting with that of decoherence, which models the passage from the microscopic to the macroscopic classical world. The crucial difference here is that decoherence arises from the correlations developed between a given particle and many other inaccessible particles (in the GPT framework it is rather likely that decoherence will always lead to an effective classical theory). By way of contrast, we considered only a few particles in isolation: roughly speaking, we studied the case where only the “local dimensions” are effectively truncated.

Our argument makes use of the concentration of measure phenomenon, epitomized by Dvoretzky’s theorem, which states, roughly, that a random low-dimensional section of a high-dimensional convex body looks approximately spherical (check out the figure in our paper for an illustration). This result then allows us to exploit the observation that spherical state spaces can be simulated by sections of quantum mechanical state spaces. Our approach also owes much to the recent work showing that bipartite correlations may be modelled by QM when the constituents locally obey QM.
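A quick numerical illustration of this rounding phenomenon (my own sketch; the paper works with general convex bodies, not this particular one): slice the n-dimensional cross-polytope, i.e. the ℓ1 ball, by a random 2-plane and measure how far the resulting section is from a disc. As n grows, the ratio of its largest to smallest radius approaches 1.

```python
import numpy as np

def section_aspect_ratio(n, n_dirs=720, rng=None):
    """Max/min radius of a random 2-plane section of the l1 ball in R^n.
    A ratio near 1 means the section is nearly circular (Dvoretzky)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Random 2-plane: orthonormalize two Gaussian vectors (QR decomposition).
    q, _ = np.linalg.qr(rng.standard_normal((n, 2)))
    thetas = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    # For a unit direction u in the plane, the l1-ball boundary is at radius
    # 1 / ||u||_1, since ||t * u||_1 = 1 exactly when t = 1 / ||u||_1.
    radii = [1.0 / np.linalg.norm(np.cos(t) * q[:, 0] + np.sin(t) * q[:, 1], 1)
             for t in thetas]
    return max(radii) / min(radii)

rng = np.random.default_rng(0)
low_dim  = section_aspect_ratio(3, rng=rng)     # section of the octahedron
high_dim = section_aspect_ratio(3000, rng=rng)  # section in high dimension
print(low_dim, high_dim)
assert high_dim < low_dim  # sections round out as the dimension grows
```

The mechanism is concentration of measure: for a random 2-plane in high dimension, the ℓ1 norm of a unit vector in that plane concentrates tightly around a single value, so the section's radius is nearly direction-independent.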

Putting all this together, we obtained our main result:

If the local measurements in a GPT are chosen from a typical section of the convex body of all possible measurements then, with a high degree of accuracy, they do not yield any post-quantum prediction for the bipartite scenario.

This looks like a very interesting paper, Tobias, and I will enjoy working through the technical details.

Before I do so, can I ask you to elucidate in non-technical terms what you mean by “typical measurement”?

Put concisely – are the measurements in a typical lab experiment “typical” in this technical sense?

Dear Dan,

Many thanks for your interest and your comment, I greatly appreciate it!

In response to your first question: think of all possible measurements in a theory as being convex combinations of “effects” which, themselves, are simply nonnegative functions of a state. In QM the effects are simply elements of a POVM. Our notion of typicality in a GPT is that these “generalised POVM” elements are each chosen “randomly, with Gaussian-distributed entries”. (The quotes indicate that the technical definition differs slightly from this intuitive version; the idea, however, is identical.) Once these “noisy” POVM elements are chosen they are assumed fixed, once and for all, for each local degree of freedom. You can label them E_1, E_2, … if you like and you are free to attach meaning to them, e.g., E_1 might be “spin up” etc. etc.
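In the quantum case, one concrete way to build "an effect with Gaussian-random entries" (my own illustration; the construction in the paper differs in technical details) is to form a positive matrix from a Gaussian one and rescale it so that it and its complement make a valid two-outcome POVM:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # local dimension

# Gaussian random matrix -> positive semidefinite matrix (a Wishart matrix).
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
W = G @ G.conj().T

# Rescale so all eigenvalues lie in [0, 1]: then {E, I - E} is a valid POVM.
E = W / np.linalg.eigvalsh(W).max()
povm = [E, np.eye(d) - E]

# Sanity check on a random pure state: outcome probabilities are valid, sum to 1.
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
psi /= np.linalg.norm(psi)
probs = [np.real(psi.conj() @ M @ psi) for M in povm]
assert all(-1e-12 <= p <= 1 + 1e-12 for p in probs)
assert abs(sum(probs) - 1) < 1e-12
```

Once drawn, such an element would be held fixed and reused in every run of the experiment, which is the "fixed once and for all" point above.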

Note that the number of outcomes of a measurement is not assumed to be chosen randomly, i.e., you are free to build up a measurement from the generalized POVM elements E_1, E_2, with as many outcomes as you like, etc.

For your second question, i.e., are typical lab experiments “typical”: the short answer is we have no idea! 🙂 All evidence points to QM as the correct theory, so we have no experimental guidance on how to map the POVM elements of QM into a more general GPT! But suppose, for the moment, that there was a more general GPT than QM describing Nature. In this case, POVM elements such as “spin up”, “polarisation up”, etc., would need to have counterparts in the GPT. If we suppose that these counterparts were perturbed by noisiness (in our case, randomness from a random matrix ensemble), then our claim is that experiments using these generalised POVMs wouldn’t be able to generate *bipartite* correlations going beyond QM.

We believe that it is likely our result also holds in the case where the generalized POVM elements, or effects, are only perturbed by a random element. This would be much harder to prove, however. (Basically as long as the probability distribution is Lipschitz continuous with respect to a random matrix ensemble, everything should work.)

I hope that helps!

Sincerely,

Tobias