Randomness through computation
A few months ago, I received a puzzling advertisement for this book, Randomness through Computation, and I eventually ordered it, despite getting a rather negative impression from reading the chapter written by Tommaso Toffoli… The book as a whole is definitely perplexing (even when correcting for this initial bias) and I would not recommend it to readers interested in simulation, in computational statistics, or even in the philosophy of randomness. My overall feeling is indeed that, while there are genuinely informative and innovative chapters in this book, some chapters read more like newspeak than scientific material (mixing the Second Law of Thermodynamics, Gödel’s incompleteness theorem, quantum physics, and NP completeness within the same sentence) and do not provide a useful entry on the issue of randomness. Hence, the book does not contribute in a significant manner to my understanding of the notion. (This post also appeared on the Statistics Forum.)
“Any attempt to capture the nature of finite randomness is subjective at the end, since it always seems to depend on the observer.” H. Zenil
The structure of the book is uniform in that all chapters involve answers to the questions
- What drew me to the study of computation and randomness
- What we have learned
- What we don’t yet know
- The most important open problems
- The prospects for progress
However, the contents and answers are quite variable in length, from three to twenty-five pages, and in depth, from meaningless chatter to mathematical results. Even though this sounds unbelievable, there is no truly probabilistic chapter about randomness (the above chapter by Toffoli making little sense)! One chapter, by Andrew Rukhin, covers statistical tests of random generators, and that is all the book has to offer in terms of statistics… The book does not seem to contain anything else about pseudo- or quasi-random generators either! (Except for a paragraph in Chapter 5.)
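To give a flavour of the kind of procedure Rukhin's chapter covers, here is my own minimal sketch (not one of the tests from the chapter) of a chi-square frequency test applied to Python's default Mersenne–Twister generator:

```python
import random

# A minimal chi-square frequency test of a uniform generator:
# bin 100,000 draws into 10 equal cells and compare the observed
# counts with the expected count n/k under uniformity.
random.seed(42)
n, k = 100_000, 10
counts = [0] * k
for _ in range(n):
    counts[int(random.random() * k)] += 1

expected = n / k
chi2 = sum((c - expected) ** 2 / expected for c in counts)
# Under uniformity, chi2 follows a chi-square law with k-1 = 9
# degrees of freedom; a value far above the 1% quantile (about 21.7)
# would lead to rejecting uniformity.
print(f"chi-square statistic: {chi2:.2f}")
```

This is of course only the crudest member of the family of tests such a chapter would discuss; serious batteries of tests probe much finer departures from uniformity and independence.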
In connection with several earlier posts of mine, if not directly with the topic of the book, Gauvrit and Delahaye wrote an interesting chapter on Benford’s law. They prove an approximation/convergence theorem related to the unimodality of the mean integrand and quantify the total variation approximation error in terms of the bound on this integrand. This chapter also contains [what I find] slightly puzzling results of sequences of distributions converging [possibly via a transform] to Benford’s law when one of their parameters goes to a boundary of their definition space. For instance, a log-normal variate with variance parameter σ² converges to Benford’s law when σ² goes to infinity. Or a uniform (0,k) rv X is such that the decimal part of πX² converges to Benford’s law when k goes to infinity. Or an exponential rv X with rate k is such that πX² again converges to Benford’s law when k goes to zero. Of course, mathematically, there is nothing unsound with the property that, while X has no limiting distribution, the decimal part of a transform of X may still have one…

Jean-Paul Delahaye (mentioned in Parcours de mathématiciens) also wrote an updated version of his 1990 survey about Martin-Löf’s definition of randomness, a notion I encountered in 1986 in Rouen during my PhD there, when discussing with Jean-Claude Dellacherie about simulations and pseudo-random generators. (I was then evaluating weird James-Stein estimators using my own congruential generator programmed in Pascal on my Apple II. Running for days!) An interesting bit of trivia is that Delahaye starts with a reference to (and a one-page extract from) von Mises‘ Kollektiv, a notion much debated in A search for certainty! For Martin-Löf, a real number (understood as a sequence of digits) r is random if and only if
for every enumerable sequence Aᵢ of sets of intervals, such that μ(Aᵢ) < 2⁻ⁱ, r does not belong to every Aᵢ.
The rest of the chapter goes through the arguments about Martin-Löf-Chaitin’s and Church-Turing theses. (The latter identifies recursive functions as those computable with a discrete deterministic finite algorithm.) This notion of Martin-Löf’s randomness is discussed in several chapters of the book.
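The Benford convergence claims above are easy to probe by simulation. The following quick sketch of mine (not from the chapter) draws log-normal variates with a large σ and compares the empirical leading-digit frequencies with Benford's probabilities P(d) = log₁₀(1 + 1/d):

```python
import math
import random

# Simulate log-normal variates with a large sigma and tabulate the
# leading (most significant) digit of each draw, to be compared with
# Benford's probabilities P(d) = log10(1 + 1/d).
random.seed(1)
n, sigma = 200_000, 10.0
counts = [0] * 10
for _ in range(n):
    x = math.exp(random.gauss(0.0, sigma))  # log-normal draw
    # leading digit = integer part of the mantissa x / 10^floor(log10 x)
    d = int(x / 10 ** math.floor(math.log10(x)))
    counts[d] += 1

for d in range(1, 10):
    benford = math.log10(1 + 1 / d)
    print(f"digit {d}: empirical {counts[d] / n:.3f}  Benford {benford:.3f}")
```

With σ this large, the fractional part of log₁₀X is essentially uniform, which is exactly the condition under which the leading digit follows Benford's law; shrinking σ towards zero instead concentrates the mantissa and destroys the fit.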
“‘Random’ is not the opposite of ‘deterministic’” G. Longo et al
Chapter 5 is far from uninteresting, trying to link notions of randomness in physics (classical=ergodicity and quantum=intrinsic), algorithmic (=à la Martin-Löf), computer science (=nondeterminism) and biology (=evolution). It is very discursive though, with an extensive list of references, and in its learned way manages to remain at such a general level that I emerged none the wiser from reading this chapter. (There is, I think, a typo at the top of page 86: “clarification of the link ! between this dichotomy between (not intrinsic) randomness and the true (intrinsic) one…”)
“We shall use the notion of algorithmic information to discuss what is a physical law, to determine the limits of the axiomatic method, and to analyze Darwin’s theory of evolution.” G. Chaitin
“Intrinsic randomness has an important prediction: it suggests that the randomness we see should be repeatable.” S. Wolfram
Gregory Chaitin writes an essay as Chapter 6 that I do not find particularly deep (despite the ambitious above program!) and that is not directly related to the topic, once again. (I wonder what Alan Sokal would make of this essay…) Chapters 7 and 8 seem equally unhelpful in defining randomness, even though the reference list in the latter is as long as the text itself. I was similarly disappointed in Wolfram’s chapter, given his tendency to monumental volumes! It mostly consisted of one-sentence paragraphs with very little content…
“Using the concept of Algorithmic Probability, it may be possible to explain why there is something rather than nothing.” H. Zenil
“That model selection error is a necessary part of statistical estimation is an idea that is relatively new, but our understanding of it has been made quite clear by algorithmic probability.” R. Solomonoff
Part III of the book is about Algorithmic inference and artificial intelligence. I had never encountered the term Algorithmic inference before, hence I was interested in finding out more. Unfortunately, this part is not self-sufficient enough to provide the “more”, and hardly compelling enough to send one looking for “more” elsewhere! The first chapter by Ray Solomonoff (which also happens to be the last article he wrote) is about algorithmic probability, built on a universal Turing machine turning an infinite iid Bernoulli input into a binary output of finite or infinite length. After reading the chapter, I remain uncertain about the whole point, even though Solomonoff has a somewhat Bayesian section (11.2, page 153) on subjectivity which considers the choice of reference [prior] as a “necessary feature” for implementing Bayesian learning. Solomonoff also mentions Rissanen’s minimum description length priors, whose applicability I always found limited… Marcus Hutter maintains that Kolmogorov’s complexity can quantify Ockham’s razor, which he defends in his complete theory of everything,
but he does not provide any illustration. Among his open problems: the need “to convince the (scientific) world that the induction problem is essentially solved”, defining artificial intelligence, and finding “a unique ‘best’ universal Turing machine”.
“We have no real proof that randomness even exists in our universe.” E. Allender
Given that this book review is already beyond a reasonable length, I will skip Calude’s “pictures with”, Miller’s survey of Martin-Löf’s randomness, Nies’ mix of travel stories (“I wrote up a 7 page paper in the internet cafés of Panama City”) and formal logic applied to computability, Downey’s summary of the Downey and Hirschfeldt monograph, with a mention of MCMC algorithms (which only need weak sources of randomness), Ferbus-Zanda and Grigorieff’s dialogue à la Hofstadter (Gödel again and… von Mises as well!), Allender’s further defense of Kolmogorov’s complexity, with an absent “connection to pseudorandomness”, Li’s alternative defense, in connection with his 2008 book with Vitányi, and Kučera’s and Staiger’s short chapters. Watanabe’s final chapter on the use of randomness in algorithms is much closer to my field of expertise than any of the previous ones, but it seems to stop at the well-known conclusion that randomness is presumably essential to achieving efficient optimisation algorithms (even though we do not understand why). The book concludes with two long transcripts (100 pages total) of panel discussions at two NKS conferences in 2007 and 2008, about “Is the Universe random?” and “What is computation? (How) does Nature compute?”, which I hope to cover in another post later. (An important remark at this stage is that NKS stands for A New Kind of Science, named after Wolfram’s book and sponsoring conferences and summer schools!)
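As a toy illustration of Watanabe’s point (my own sketch, with a made-up multimodal test function, nothing from the chapter): pure random search on [0, 10] will, with a modest budget, locate the narrow global minimum of a function riddled with local minima, where a naive deterministic scheme of the same cost could easily get trapped.

```python
import math
import random

# Made-up test function: many local minima, global minimum at x = 7.5
# where both terms vanish.
def f(x):
    return (x - 7.5) ** 2 / 100 + abs(math.sin(3 * (x - 7.5)))

# Pure random search: draw uniform candidates, keep the best one seen.
random.seed(0)
best_x, best_val = None, float("inf")
for _ in range(10_000):
    x = random.uniform(0, 10)
    if f(x) < best_val:
        best_x, best_val = x, f(x)

print(f"best x = {best_x:.3f}, f = {best_val:.4f}")
```

This is of course the crudest possible randomised algorithm; the open question the chapter gestures at is why such injections of randomness so often help, and whether derandomised versions can match them.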