## Not so Fooled by Randomness (2)

As mentioned earlier, there are some points in **Fooled by Randomness** where Taleb’s treatment of randomness takes an extra-mathematical turn whose relevance may be questioned. For instance,

“Probability theory is a young arrival in mathematics; probability applied to practice is almost non-existent as a discipline” (p.xli)

**T**his sounds singularly bizarre: first, there is this implicit notion that probability opposes the rigour of mathematics (as stressed also in the Notes of **Fooled by Randomness**), which confuses the object of the theory (uncertain events) with the mathematics of it, where nothing is uncertain. Second, applied probability is a thriving field, from networks to statistics to machine learning.

“Where statistics fails us is when distributions are not symmetric. (…) If there is a very small probability of finding a red ball, then our knowledge will increase more slowly than at the expected square root of n.” (p.112)

**T**his is wrong. Asymmetries in the distributions do not prevent statistical principles from operating (take a Gamma or a log-normal distribution), nor do they prevent the CLT from applying to the estimate of the proportion of red balls in an urn problem. The following paragraph considers a more complex urn model where the proportion of reds changes at each draw, but the conclusion that *“knowledge derived through statistics is shaky”* is hasty: we can then infer the marginal distribution of the draws and, from it, indirectly the distribution of the proportion (this is an inverse problem).
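
As a quick illustration (a minimal sketch, with an arbitrary small probability chosen for the sake of the example), the error of the empirical proportion of reds still shrinks at the square-root-of-n rate guaranteed by the CLT, even though a Bernoulli distribution with small success probability is highly asymmetric:

```python
import random

random.seed(1)

# Estimating a small probability p of drawing a red ball: despite the
# asymmetry of the Bernoulli(p) distribution, the error of the empirical
# proportion shrinks at the sqrt(n) rate predicted by the CLT.
p = 0.05
err = {}
for n in (400, 10_000):
    trials = []
    for _ in range(300):
        reds = sum(random.random() < p for _ in range(n))
        trials.append(abs(reds / n - p))
    err[n] = sum(trials) / len(trials)
    print(f"n={n:>6}  mean |p_hat - p| = {err[n]:.5f}")
```

Multiplying the sample size by 25 divides the mean error by about 5, i.e. by the square root of 25, exactly as the symmetric-case theory predicts.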

“A long series of coin flips (…) may get eight heads in a row.” (p.155)

**A**ctually, the author would have benefited from a closer look at Feller’s **An Introduction to Probability Theory and Its Applications**, because the phenomenon is even more surprising: in most cases, the walk will spend the majority of its time on one side of the zero line, since the proportion of time spent winning is distributed according to the arcsine law.
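
A short simulation (a sketch with arbitrary run lengths) makes the arcsine phenomenon concrete: the fraction of time a fair coin-flip walk spends on the positive side piles up near 0 and 1, not near 1/2:

```python
import random

random.seed(1)

# Fair-coin random walks: record the fraction of time spent strictly above
# zero. The arcsine law says this fraction concentrates near 0 and 1.
def frac_positive(n):
    s, pos = 0, 0
    for _ in range(n):
        s += 1 if random.random() < 0.5 else -1
        pos += s > 0
    return pos / n

fracs = [frac_positive(1_000) for _ in range(2_000)]
extreme = sum(f < 0.1 or f > 0.9 for f in fracs) / len(fracs)
middle = sum(0.4 < f < 0.6 for f in fracs) / len(fracs)
print(f"near the edges: {extreme:.2f}   near 1/2: {middle:.2f}")
```

Roughly forty percent of the walks spend over 90% (or under 10%) of their time on one side, while only about one in eight hovers near an even split.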

“…there is no such thing as a pure random draw for the outcome of the draw depends on the quality of the equipment.” (p.169)

**P**ure randomness may be a new notion in probability, but I think the author means uniformly distributed, even though he distinguishes earlier between random and equiprobable. The limited quality of the equipment adds an extra level of noise and thus does not remove the randomness; it simply convolves two sources of randomness. (Very “pure” generators can be constructed from radioactive materials, since decay times produce exponential random variables.)
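
The convolution point can be checked numerically (a minimal sketch, with an arbitrary uniform “pure” draw and Gaussian “equipment” noise): the output is distributed as the convolution of the two densities, and for independent sources the variances simply add.

```python
import random

random.seed(1)

# A "pure" draw X plus independent equipment noise E yields X + E, whose
# distribution is the convolution of the two densities: randomness is
# compounded, not removed. Independent variances add.
N = 100_000
draws = [random.uniform(-1, 1) + random.gauss(0, 0.1) for _ in range(N)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Var(Unif(-1,1)) = 1/3 and Var(N(0, 0.1)) = 0.01, so Var(X + E) = 0.3433...
print(f"empirical variance: {var(draws):.4f}")
```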

“…the probability of guessing correctly depends on past success…” (p.177)

**T**his description of the Polya urn process is impossible to understand for anyone not familiar with…the Polya urn process! The standard Polya urn is nonetheless easy to explain: just add *c* balls of the colour you just drew from the urn before repeating the draw (and it is easy to run via a Monte Carlo simulator).
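
Indeed, the whole mechanism fits in a few lines (a minimal sketch, with arbitrary starting composition and reinforcement *c*):

```python
import random

random.seed(1)

# Standard Polya urn: draw a ball, return it with c extra balls of the same
# colour. Past successes thus increase the probability of future successes.
def polya_draws(red=1, blue=1, c=1, n=500):
    out = []
    for _ in range(n):
        is_red = random.random() < red / (red + blue)
        out.append(is_red)
        if is_red:
            red += c
        else:
            blue += c
    return out

# The long-run proportion of reds varies wildly from one run to the next:
# for red = blue = c = 1 its limit is uniformly distributed on (0, 1).
props = [sum(polya_draws()) / 500 for _ in range(5)]
print([round(p, 2) for p in props])
```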

“Independence is a requirement for working with the math of probability” (p.177)

**A**gain wrong. Especially surprising when considering earlier and later developments in **Fooled by Randomness** about time series and stochastic processes, which are precisely probabilistic models of dependence.

“Monte Carlo simulations (…) get results where mathematics fails us.” (p.178)

**T**his is very poetic, maybe, but rather nonsensical as well: a Monte Carlo simulation requires a mathematical construct to make sense. If there is no probability distribution behind the simulation, it cannot be run. Obviously, from a practical point of view, the author means that it avoids some arduous computations, but Monte Carlo remains stuck within the mathematical realm, whether it wills it or not!
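
To see how the mathematics never leaves the picture, here is a minimal sketch (with an arbitrary integrand chosen for the example): even the simplest Monte Carlo estimate is an expectation under an explicit probability distribution.

```python
import math
import random

random.seed(1)

# Monte Carlo estimation of E[cos(U)] for U uniform on (0, pi/2): the
# simulation only makes sense because a distribution (here, the uniform)
# and a target expectation are specified mathematically beforehand.
# Exact value: (2/pi) * integral of cos over (0, pi/2) = 2/pi.
N = 100_000
est = sum(math.cos(random.uniform(0, math.pi / 2)) for _ in range(N)) / N
print(f"Monte Carlo: {est:.4f}   exact: {2 / math.pi:.4f}")
```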

“…one single observation, the worst possible inferential mistake a person can make.” (p.195)

**F**irst, this seems to mean that you cannot make inference from one single observation (where is Bayes when you need him?!). Second, it contradicts the “impossible inference” argument that adding observations does not necessarily add to your knowledge. Overall, this is logical nonsense: why “worst” and why “mistake”?! (The little story around this quote does not add to the understanding, except for the concept of overstating the importance of small samples, which has no precise statistical meaning…)
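
On the Bayesian side, a single observation is perfectly workable (a minimal sketch, with a hypothetical observed value and a standard conjugate normal setup chosen for illustration): one draw x ~ N(θ, 1) combined with a N(0, 1) prior on θ yields the posterior N(x/2, 1/2).

```python
import math

# Bayesian inference from one single observation, hypothetical numbers:
# prior theta ~ N(0, 1), likelihood x | theta ~ N(theta, 1).
# Conjugacy gives the posterior theta | x ~ N(x/2, 1/2).
x = 1.4  # the single (illustrative) observation
post_mean = x / 2
post_var = 1 / 2
lo = post_mean - 1.96 * math.sqrt(post_var)
hi = post_mean + 1.96 * math.sqrt(post_var)
print(f"posterior: N({post_mean:.2f}, {post_var:.2f})")
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```

One observation gives a well-defined, if diffuse, posterior; hardly the “worst possible inferential mistake”.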

“We cannot instinctively understand the nonlinear aspect of probability” (p.215)

**N**onlinearity is a weird concept when applied to probability. Looking further into this page, it seems to mean that randomness at different scales changes in a very “nonlinear” way. This still does not make much sense.
