Truly random [again]

“The measurement outputs contain at the 99% confidence level 42 new random bits. This is a much stronger statement than passing or not passing statistical tests, which merely indicate that no obvious non-random patterns are present.” arXiv:0911.3427

As often, I bought La Recherche in the station newsagent for the wrong reason! The cover of the December issue was about “God and Science” and I thought this issue would bring some interesting and deep arguments in connection with my math and realism post. The debate is very short, does not go into any depth, reproduces the Hawking quote that started the earlier post, and recycles the same graph about cosmology I used last summer in Vancouver! However, there are other interesting entries about probabilistic proof checking in Mathematics and truly random numbers… The first part is on an ACM paper on the PCP theorem by Irit Dinur, but it is too terse as is (and the theory behind it presumably escapes my abilities!). The second part is about a paper in Nature published by Pironio et al. and arXived as well. It is entitled “Random numbers certified by Bell’s Theorem” and is also one of the laureates of the La Recherche prize this year. I was first annoyed by the French coverage of the paper, mentioning that “a number was random with a probability of 99%” (?!) and that “a sequence of numbers is perfectly random” (re-?!). The original paper however states the same thing, hence stressing the different meaning those physicists associate with randomness: “the unpredictable character of the outcomes” and “universally-composable security”. The above “probability of randomness” is actually a p-value (associated with the null hypothesis that Bell’s inequality is not violated) that is equal to 0.00077. (So the above quote is somewhat paradoxical!) The huge apparatus used to produce those random events is not very efficient: on average, 7 binary random numbers are detected per hour… A far cry from the “truly random” generator produced by Intel!
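The contrast the paper draws with mere statistical testing can be made concrete: a test like NIST’s frequency (monobit) test only checks the fit of the output to a fair-coin distribution, returning a p-value under that null. A minimal Python sketch (the function name and sample sizes are mine, not taken from any of the papers):

```python
import math
import random

def monobit_pvalue(bits):
    """NIST-style frequency (monobit) test: p-value under the null
    hypothesis that the bits are i.i.d. fair coin flips."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)  # +1 for a one, -1 for a zero
    return math.erfc(abs(s) / math.sqrt(2 * n))

random.seed(42)
bits = [random.getrandbits(1) for _ in range(10_000)]
print(monobit_pvalue(bits))    # a pseudo-random generator: no evidence of bias

biased = [1] * 6000 + [0] * 4000
print(monobit_pvalue(biased))  # clearly biased sequence: minuscule p-value
```

Of course, passing such a test says nothing about unpredictability, which is exactly the point the Nature paper is making.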

Ps. As a coincidence, Julien Cornebise pointed out to me that there is a supplement in the journal about “Le Savoir du Corps” which is in fact handled by the pharmaceutical company Servier, currently under investigation for its drug Mediator… A very annoying breach of basic journalistic ethics in my opinion!

10 Responses to “Truly random [again]”

  1. See a much longer, more detailed, and more productive paper, “Experimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling”, published April 11, 2018.

    In conversations with one of the authors (Shalm), I learned that existing physical processes for generating random numbers are not sufficiently uniform, to the extent that simulations based on them have come to incorrect conclusions. I’m still looking for details.

  2. […] interesting consequences in parallel implementations where randomness becomes questionable, or in physical random generators, whose independence may also be questionable… […]

  3. La Recherche seems no stranger to corporate involvement: the latest issue has a special booklet on oil research and bituminous sands sponsored by… Total!

  4. […] generators, even though they work well. He even goes further as to warn about bias because even the truly random generators are discrete. The book covers the pseudo-random generators, starting with the original […]

  5. The RSS e-News points out a BBC show about “how to make sure numbers are really random”. No statistician was part of the panel, it appears…

  6. You’re right: the word “random” in the Nature paper is more to be understood as “unpredictable” than as “characterized by a given probability distribution”.
    In particular, when you consider radioactive decay, the process appears random from a statistical point of view, but you cannot rule out the existence of a hidden variable whose knowledge would make the process deterministic. The violation of a so-called “Bell inequality”, however, is incompatible with any such (local) hidden variable model for your process, meaning that no amount of knowledge could ever allow you to predict with certainty the outcome of your experiment. This is the kind of “randomness” considered by Pironio et al.
    I’m curious: do these kinds of considerations make any sense to a statistician?

    • I think I understand the fundamental result about the Bell inequality. From my statistician’s perspective, however, I only care about the (statistical) distribution of the outcomes of random generators, which means a sequence that withstands testing by batteries like Marsaglia’s diehard tests. This is the only reason I have been posting entries against “true” randomness claims. Once again, those criticisms only pertain to the fit to a distribution. Complete unpredictability and causality vs. non-causality escape my realm and I have no statistician’s opinion to express!
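As an illustration of what such battery-style distribution tests check, here is a minimal sketch of the NIST runs test, which rejects a sequence whose pattern of runs is wrong even when its 0/1 frequencies are perfectly balanced (the code is illustrative, not Marsaglia’s actual diehard implementation):

```python
import math

def runs_test_pvalue(bits):
    """NIST runs test: p-value under the null of i.i.d. fair bits,
    sensitive to oscillation that a simple frequency test misses."""
    n = len(bits)
    pi = sum(bits) / n  # proportion of ones
    runs = 1 + sum(bits[i] != bits[i + 1] for i in range(n - 1))
    num = abs(runs - 2 * n * pi * (1 - pi))
    den = 2 * math.sqrt(2 * n) * pi * (1 - pi)
    return math.erfc(num / den)

# Perfectly balanced in frequency, yet obviously non-random:
alternating = [0, 1] * 5000
print(runs_test_pvalue(alternating))  # minuscule p-value: rejected
```

A full battery strings together many such tests, each targeting a different departure from the uniform i.i.d. null.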

  7. To be fair, the point of the paper by Pironio et al. is not to produce “good” random numbers in a statistical sense and indeed their protocol fails on that matter.

    The idea is that one can prove that a random process was “truly” random (but possibly biased) in the sense that the outcome of the experiment could not have been known (even by God) before the experiment was performed.
    In my opinion, it’s quite an amazing feature of Nature that such a claim can be proven.

    You could argue that flipping a coin does the same thing, namely that you cannot predict the outcome beforehand, but in principle you could (if you could model your coin, the way you threw it, and so on, precisely enough). In fact, the same applies to any usual random process: with enough computational power, you would in principle be able to predict its outcome.

    In this paper, the authors show that, even in principle, the outcome of the experiment based on a violation of a Bell inequality is not predetermined.

    • This sounds like very old-fashioned (Laplacian) determinism… Anyway, I agree that we are not talking about the same thing when we consider randomness! Probabilistic phenomena are found in other parts of physics, like radioactive decay, and while they can be explained at a certain level, hence are “predictable”, they remain random in the probabilistic sense that they can be characterised by a probability distribution (e.g., Poisson). To call a single number random thus does not make sense in this perspective.
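For readers curious about the inequality behind this exchange, the CHSH form of Bell’s inequality can be checked numerically: any local hidden-variable model satisfies |S| ≤ 2, while the singlet-state quantum correlations E(x, y) = −cos(x − y) reach 2√2. A small sketch (the angles and names are the textbook ones, not taken from the actual Pironio et al. setup):

```python
import math

def chsh(E, a, a2, b, b2):
    """CHSH statistic S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
    for a correlation function E and measurement angles a, a', b, b'."""
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

# Singlet-state quantum correlations between measurement angles x and y.
quantum = lambda x, y: -math.cos(x - y)
# Textbook angles giving the maximal quantum violation.
S = chsh(quantum, 0.0, math.pi / 2, math.pi / 4, -math.pi / 4)
print(abs(S))  # 2*sqrt(2) ~ 2.828: exceeds the local bound of 2

# A trivial local deterministic model (both outcomes always +1)
# saturates, but cannot exceed, the bound |S| <= 2.
classical = lambda x, y: 1.0
S_local = chsh(classical, 0.0, math.pi / 2, math.pi / 4, -math.pi / 4)
print(S_local)  # 2.0
```

The experimental p-value quoted in the post measures how implausible the observed value of such a statistic would be if a local (hence in-principle predictable) model were at work.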
