Filed under: Kids, Running Tagged: chocolate, fire, Patrick Roger, Sceaux

**N**ow, what is unimaginable in the maths behind Borges’ great Library of Babel? The obvious line of entry to the mathematical aspects of the book is combinatorics: how many different books are there in total? [Ans. 10¹⁸³⁴⁰⁹⁷…] How many hexagons are needed to shelve that many books? [Ans. 10⁶⁸¹⁵³¹…] How long would it take to visit all those hexagons? How many librarians are needed for a Library containing all volumes once and only once? How many different libraries are there? [Ans. 10^{10⁶}…] The book then embarks upon some cohomology, Cavalieri’s infinitesimals (mentioned by Borges in a footnote), Zeno’s paradox, topology (with Klein’s bottle), graph theory (and the important question of whether each hexagon has one or two stairs), information theory, and Turing’s machine. The concluding chapters comment on other mathematical analyses of Borges’ Grand Œuvre and discuss how much maths Borges actually knew.
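The first of these counts is easy to reproduce from Borges’ own specifications (25 orthographic symbols; 410 pages of 40 lines of 80 characters per book); a quick sketch:

```python
import math

# Borges' specifications for each volume of the Library:
# 25 orthographic symbols, 410 pages, 40 lines per page, 80 characters per line
symbols = 25
chars_per_book = 410 * 40 * 80   # 1,312,000 characters per book

# the number of distinct books is 25 ** 1_312_000; its order of magnitude:
digits = math.floor(chars_per_book * math.log10(symbols)) + 1
print(f"25^{chars_per_book} has {digits} digits, i.e. about 10^{digits - 1}")
# → about 10^1834097, matching the answer quoted above
```

Working on the log scale avoids materialising a number with close to two million digits.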

**S**o, a nice escapade through some mathematical landscapes with more or less connection to the original masterpiece. I am not convinced it brings any further dimension or insight to it, or even that one should try to dissect it this way, because doing so kills the poetry in the story, especially the play around the notion(s) of infinity. The very incompleteness of the short story [its being short on details] is what makes its beauty: if one starts wondering about the possibility of the Library or about the daily life of the librarians [what do they eat? why are they there? where are the readers? what happens when they die? etc.], the intrusion of realism breaks the enchantment! Nonetheless, *The Unimaginable Mathematics of Borges’ Library of Babel* provides a pleasant entry into some mathematical concepts and as such may initiate a layperson not too shy of maths formulas into the beauty of mathematics.

Filed under: Books, Statistics, Travel, University life Tagged: book review, Boston, cohomology, combinatorics, infinity, information theory, Jorge Luis Borges, JSM 2014, Library of Babel, Oxford University Press, Turing's machine

If you have any suggestions of novel directions in computational statistics, or conversely of dead ends, I would be most interested in hearing them! So please do comment or send emails to my gmail address bayesianstatistics…

Filed under: Books, pictures, R, Statistics, University life Tagged: ABC, Apple II, approximation, BUGS, computational statistics, expectation-propagation, JAGS, MCMC, MCMSki IV, Monte Carlo, optimisation, STAN, statistical computing, sunset, variational Bayes methods

“For the first nine years of its existence, aside from being appointed the flagship, there was nothing particularly special about it, from a statistical point of view.”

**A** book I grabbed at the last minute in a bookstore in downtown Birmingham. Maybe I should have waited that extra minute… Or picked the other Scalzi on the shelf, *Lock In*, which had just come out! (I have already ordered that one for my upcoming lecture in Gainesville. Along with the

“What you’re trying to do is impose causality on random events, just like everyone else here has been doing.”

**W**hat amazes me most is that Scalzi’s *redshirts* got the 2013 Hugo Award. I mean, the Hugo Award?! While I definitely liked the Old Man’s War saga, this novel is more of a light writing experiment and a byproduct of writing a TV series. Enjoyable at a higher conceptual level, but not as a story. Although this is somewhat of a spoiler (!), the title refers to the characters wearing red shirts in Star Trek, who have a statistically significant tendency to die on their next mission. [Not that I knew this when I bought the book! Maybe it would have warned me off it.] And *redshirts* is about those characters reflecting on how unlikely their fate is (or rather the fate of the characters before them) and rebelling against the series writer. Games with the paradoxes of space travel and doubles ensue. Then games within games. The book is well-written and, once again, enjoyable at some level, with alternative writing styles used in different parts (or codas) of the novel. It still remains a purely intellectual exercise, with no psychological involvement with the characters. I just cannot relate to the story. Maybe because of the pastiche aspect or the mostly comic turn. *redshirts* certainly feels very different from those Philip K. Dick stories (e.g., *Ubik*) where virtual realities abound without a definitive conclusion on which is which.

Filed under: Books, pictures, Travel Tagged: Birmingham, England, Hugo Awards, John Scalzi, Patrick Rothfuss, redshirts, Star Trek

“Mon premier marathon je le fais en courant.”

[My first marathon, I will do it running.]

Filed under: pictures, Running Tagged: Badwater Ultramarathon, Florida, Marathon FL, métro, métro static, Paris, sea, sunset

“Using ABC to evaluate competing models has various hazards and comes with recommended precautions (Robert et al. 2011), and unsurprisingly, many if not most researchers have a healthy scepticism as these tools continue to mature.”

**M**ichael Hickerson just published an open-access letter with the above title in Molecular Ecology. (As in several earlier papers, incl. the (in)famous ones by Templeton, Hickerson confuses running an ABC algorithm with conducting Bayesian model comparison, but this is not the main point of this post.)

“Rather than using ABC with weighted model averaging to obtain the three corresponding posterior model probabilities while allowing for the handful of model parameters (θ, τ, γ, Μ) to be estimated under each model conditioned on each model’s posterior probability, these three models are sliced up into 143 ‘submodels’ according to various parameter ranges.”

**T**he letter is in fact a supporting argument for the earlier paper of Pelletier and Carstens (2014, Molecular Ecology), which conducted the above splitting experiment. I have not read this paper, so I cannot judge the relevance of splitting the parameter range this way. From what I understand, it amounts to using priors with mutually exclusive supports.

“Specifically, they demonstrate that as greater numbers of the 143 sub-models are evaluated, the inference from their ABC model choice procedure becomes increasingly.”

**A**n interestingly cut sentence. Increasingly unreliable? mediocre? weak?

“…with greater numbers of models being compared, the most probable models are assigned diminishing levels of posterior probability. This is an expected result…”

**T**rue, if the number of models under consideration increases, under a uniform prior over model indices, the posterior probability of a given model mechanically decreases. But pairwise Bayes factors should not be impacted by the number of models under comparison, and Hickerson’s letter states that Pelletier and Carstens found the opposite:
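This point is easy to illustrate numerically; a toy sketch with made-up marginal likelihoods (not taken from either paper):

```python
# With a uniform prior over model indices, adding models mechanically
# shrinks every posterior model probability, while the pairwise Bayes
# factor between two fixed models is untouched.
marginals = [0.8, 0.4, 0.2]           # made-up m(x | M_k) for three models

def posterior_probs(ms):
    # the uniform prior p(M_k) = 1/len(ms) cancels out in the ratio
    return [m / sum(ms) for m in ms]

bf_12 = marginals[0] / marginals[1]   # Bayes factor of M_1 versus M_2
p_before = posterior_probs(marginals)[0]

# enlarge the comparison with ten more (weak) models
marginals += [0.1] * 10
p_after = posterior_probs(marginals)[0]

print(bf_12, p_before, p_after)
# the Bayes factor stays at 2.0 while the posterior probability of M_1
# drops from about 0.57 to about 0.33
```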

“…pairwise Bayes factor[s] will always be more conservative except in cases when the posterior probabilities are equal for all models that are less probable than the most probable model.”

**W**hich means that the “Bayes factor” in this study is computed as the ratio of a marginal likelihood to a compound (or super-marginal) likelihood, averaged over all models and hence incorporating the prior probabilities of the model indices as well. I had never encountered such a proposal before. Contrary to the letter’s claim:
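As I read it, the quantity at stake can be written down as follows; a hypothetical sketch, with made-up marginal likelihoods, of how this “compound” ratio moves with the model set while the pairwise Bayes factor does not:

```python
# "Compound" Bayes factor as described above: the marginal likelihood of
# one model over the prior-weighted average of the marginal likelihoods
# of *all* models under comparison. Numbers are made up for illustration.
def compound_bf(k, marginals):
    priors = [1 / len(marginals)] * len(marginals)   # uniform model prior
    super_marginal = sum(p * m for p, m in zip(priors, marginals))
    return marginals[k] / super_marginal

ms = [0.8, 0.4]
print(compound_bf(0, ms))          # 0.8 / 0.6 ≈ 1.33

# the pairwise Bayes factor 0.8 / 0.4 = 2 ignores the rest of the model
# set, but the compound version moves as soon as another model joins in:
print(compound_bf(0, ms + [0.1]))  # 0.8 / (1.3 / 3) ≈ 1.85
```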

“…using the Bayes factor, incorporating all models is perhaps more consistent with the Bayesian approach of incorporating all uncertainty associated with the ABC model choice procedure.”

**B**esides the needless inclusion of ABC in this sentence, this is a somewhat confusing claim, as Bayes factors are not, *stricto sensu*, Bayesian procedures, since they remove the prior probabilities of the models from the picture.

“Although the outcome of model comparison with ABC or other similar likelihood-based methods will always be dependent on the composition of the model set, and parameter estimates will only be as good as the models that are used, model-based inference provides a number of benefits.”

**A**ll models are wrong, but the very fact that they are models allows for producing pseudo-data from them and for checking whether the pseudo-data is similar enough to the observed data, in the components that matter most to the experimenter. Hence a loss function of sorts…
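A minimal sketch of this model-check idea: simulate pseudo-data from a fitted model and ask how often a summary of interest is at least as extreme as the observed one. The model, the summary, and the numbers below are all made up for illustration:

```python
import random

random.seed(42)
observed = [2.1, 1.9, 3.2, 2.7, 2.4, 2.0, 2.8, 2.5]
obs_summary = max(observed)        # the component that "matters" here

# fitted model: Gaussian with moments matched to the observed sample
n = len(observed)
mu = sum(observed) / n
sd = (sum((x - mu) ** 2 for x in observed) / n) ** 0.5

# predictive check: how often does pseudo-data look at least as extreme?
hits = 0
reps = 10_000
for _ in range(reps):
    pseudo = [random.gauss(mu, sd) for _ in range(n)]
    hits += max(pseudo) >= obs_summary
print(f"predictive p-value for the max: {hits / reps:.3f}")
```

A predictive p-value close to 0 or 1 would flag the chosen summary as poorly reproduced by the model.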

Filed under: Statistics, University life Tagged: ABC, Bayes factor, Bayesian model choice, George Box, model posterior probabilities, Molecular Ecology, phylogenetic model, phylogeography

Filed under: pictures, Running, Travel Tagged: Danube, Donau, Donauinsel, heron, morning run

“I confess that early in my career as a physicist I was rather cynical about sophisticated statistical tools, being of the opinion that “if any of this makes a difference, just get more data”. That is, if you do enough experiments, the confidence level will be so high that the exact statistical treatment you use to evaluate it is irrelevant.” John Butterworth, Sept. 15, 2014

**A**fter Val Johnson‘s suggestion to move the significance level from .05 down to .005, hence roughly from 2σ up to 3σ, John Butterworth, a physicist whose book *Smashing Physics* just came out, discusses in The Guardian the practice of using 5σ in physics. It was actually prompted by Louis Lyons’ arXival of a recent talk making the following points (discussed below):

- Should we insist on the 5 sigma criterion for discovery claims?
- The probability of A, given B, is not the same as the probability of B, given A.
- The meaning of p-values.
- What is Wilks’ Theorem and when does it not apply?
- How should we deal with the ‘Look Elsewhere Effect’?
- Dealing with systematics such as background parametrisation.
- Coverage: What is it and does my method have the correct coverage?
- The use of p0 versus p1 plots.

**B**utterworth’s conclusion is worth reproducing:

“…there’s a need to be clear-eyed about the limitations and advantages of the statistical treatment, wonder what is the “elsewhere” you are looking at, and accept that your level of certainty may never feasibly be 5σ. In fact, if the claims being made aren’t extraordinary, a one-in-2-million chance of a mistake may indeed be overkill, as well as being unobtainable. And you have to factor in the consequences of acting, or failing to act, based on the best evidence available – evidence that should include a good statistical treatment of the data.” John Butterworth, Sept. 15, 2014

esp. the part about the “consequences of acting”, which I interpret as incorporating a loss function in the picture.
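For reference, these σ thresholds translate into tail probabilities that are easy to compute from the standard normal distribution; a quick sketch (using one-sided tails, which I take to be the usual physics convention):

```python
import math

# Tail probability of the standard normal beyond n sigma (one-sided)
def one_sided_p(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (2, 3, 5):
    p = one_sided_p(n)
    print(f"{n}σ: p = {p:.3g}  (about 1 in {round(1 / p):,})")
# 2σ ≈ 0.023, 3σ ≈ 0.0013, and 5σ ≈ 2.9e-7, i.e. roughly one in
# 3.5 million one-sided (one in 1.7 million if taken two-sided)
```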

**L**ouis’s paper-ised talk:

1. It [somewhat] argues in favour of the 5σ criterion because 2σ and 3σ are not necessarily significant on larger datasets. I figure the same could be said of 5σ, no?! He also mentions (a) “systematics”, which I do not understand, even though this is not the first time I encounter the notion in physics, and (b) “subconscious Bayes factors”, meaning that the likelihood ratio [considered here as a transform of the p-value] is moderated by the ratio of the prior probabilities, even when people do not follow a Bayesian procedure. But this does not explain why a fixed deviation from the mean should be adopted.
2.–3. The next two points are about the common confusion in the use of the p-value, found in most statistics textbooks. Even though the defence of the p-value against the remark that it is wrong half the time (as in Val’s PNAS paper) misses the point.
4. *Wilks’ theorem* is a warning that the χ² approximation only operates under some assumptions.
5. *Looking elsewhere* is the translation of multiple testing or cherry-picking.
6. *Systematics* is explained here as a form of model misspecification. One suggestion is to use a Bayesian modelling of this misspecification, another a non-parametric approach (why not both together?!).
7. *Coverage* is somewhat disjoint from the other points, as it explains the [frequentist] meaning of the coverage of a confidence interval, which hence does not apply to the actual data.
8. *p0 versus p1 plots* is a sketchy part referring to a recent proposal by the author.

So in the end, a rather anticlimactic coverage of standard remarks, surprisingly giving birth to a sequence of posts (incl. this one!)…

Filed under: Books, Statistics, University life Tagged: Bayesian modeling, five sigma, John Butterworth, likelihood ratio, Louis Lyons, p-values, PNAS, The Guardian, Valen Johnson

“All models are wrong, and increasingly you can succeed without them.”

**A** quote that I found rather shocking, especially when considering the amount of modelling behind Google tools. And coming from someone citing *Kernel Methods for Pattern Analysis* by Shawe-Taylor and Cristianini as one of his favourite books, and *Bayesian Data Analysis* as another one… Or displaying Bayes [or his alleged portrait] and Turing on his book cover. So I went searching the Web for more information about this surprising quote. And found the explanation, as given by Peter Norvig himself:

“To set the record straight: That’s a silly statement, I didn’t say it, and I disagree with it.”

**W**hich means that weird quotes have a high probability of being misquotes. And of being used by others to (obviously) support their own agenda. In the current case, Chris Anderson and his *End of Theory* paradigm, briefly and mildly discussed by Andrew a few years ago.

Filed under: Books, pictures, Statistics, Travel, University life Tagged: Alan Turing, all models are wrong, artificial intelligence, George Box, misquote, Peter Norvig, statistical modelling, The End of Theory, Thomas Bayes

Filed under: pictures, Travel Tagged: Austria, Baroque architecture, Franz Joseph I, Habsburgs, Schönbrunn palace, Unesco World Heritage List, Vienna