Archive for the Statistics Category

back to the Bayesian Choice

Posted in Books, Kids, Statistics, University life on October 17, 2018 by xi'an

Surprisingly (or not?!), I received two requests about some exercises from The Bayesian Choice, one from a group of students at McGill who had difficulties solving the above exercise, wondering about the properness of the posterior (but missing the integration in x), to whom I sent back this correction. And another one from the Czech Republic about a difficulty with the term “evaluation”, by which I meant (pardon my French!) estimation.

severe testing or severe sabotage? [not a book review]

Posted in Books, pictures, Statistics, University life on October 16, 2018 by xi'an

Last week, I received this new book of Deborah Mayo, which I was looking forward to reading and annotating!, but thrice alas, the book had been sabotaged: except for the preface and acknowledgements, the entire book is printed upside down [a minor issue since the entire book is concerned] and with some part of the text cut on each side [a few letters each time but enough to make reading a chore!]. I am thus waiting for a tested copy of the book to start reading it in earnest!


unbiased estimation of log-normalising constants

Posted in Statistics on October 16, 2018 by xi'an

Maxime Rischard, Pierre Jacob, and Natesh Pillai [warning: two of whom are co-authors and friends of mine!] have just arXived a paper on the use of path sampling (a.k.a. thermodynamic integration) for the unbiased approximation of log-normalising constants and the resulting consequences on Bayesian model comparison by X validation. If the goal is the estimation of the log of a ratio of two constants, creating an artificial path between the corresponding distributions and looking at the derivative of the log-density at any point of this path produces an unbiased estimator. Meaning that sampling a random point along the path, corrected by the distribution of that sampling, still produces an unbiased estimator. From there the authors derive an unbiased estimator for any X validation objective function, CV(V,T)=-log p(V|T), taking m observations in as the training sample T and leaving n-m observations out as the validation sample V… The marginal conditional log-density in the criterion is then estimated by an unbiased path sampler, using a powered conditional likelihood, together with unbiased MCMC schemes à la Jacob et al. for simulating unbiased realisations from the intermediary targets on the path, tuned towards an approximately constant cost for all powers.
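
To fix ideas, here is a minimal sketch of the path sampling identity at the core of the paper, not the authors' implementation: for two Gaussian kernels with known normalising constants, connected by a geometric path, drawing the path point uniformly and the state exactly from the intermediary distribution (which the paper replaces with unbiased MCMC à la Jacob et al.) returns an unbiased estimate of the log-ratio. All names below are mine, not the paper's.

# minimal sketch of the path sampling identity, not the authors' code:
# estimating log(Z1/Z0) for the Gaussian kernels q0(x)=exp(-x^2/2) and
# q1(x)=exp(-(x-mu)^2/(2*sig^2)), with true value log(Z1/Z0)=log(sig)
mu <- 1; sig <- 2
logq0 <- function(x) -x^2/2
logq1 <- function(x) -(x - mu)^2/(2 * sig^2)
# along the geometric path q_t = q0^(1-t) q1^t, p_t is again Gaussian
rpath <- function(t){
  prec <- (1 - t) + t / sig^2          # precision of p_t
  rnorm(1, t * mu / sig^2 / prec, 1 / sqrt(prec))
}
# one unbiased draw: t uniform over (0,1), x from p_t, derivative of log q_t
one <- function(){
  t <- runif(1)
  x <- rpath(t)
  logq1(x) - logq0(x)                  # d/dt log q_t(x) for the geometric path
}
mean(replicate(1e5, one()))            # close to log(sig), i.e. 0.693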

So in all objectivity and fairness (!!!), I am quite excited by this new proposal within my favourite area! Or rather two areas, since it brings together the estimation of constants and an alternative to Bayes factors for Bayesian testing. (Although the paper does not touch upon the calibration of the X validation values.)

ABC intro for Astrophysics

Posted in Books, Kids, Mountains, R, Running, Statistics, University life on October 15, 2018 by xi'an

Today I received in the mail a copy of the short book published by edp sciences after the courses we gave last year at the astrophysics summer school, in Autrans. Which contains a quick introduction to ABC extracted from my notes (which I still hope to turn into a book!). As well as a longer coverage of Bayesian foundations and computations by David Stenning and David van Dyk.

Juan Antonio Cano Sanchez (1956-2018)

Posted in Statistics, University life on October 12, 2018 by xi'an

I have just learned the very sad news that Juan Antonio Cano, from Universidad de Murcia, with whom Diego Salmerón and I wrote two papers on integral priors, has passed away, after a long fight against a kidney disease. Having communicated with him recently, I am quite shocked by his passing away, as I was not aware of his poor health. The last time we met was at the O'Bayes 2015 meeting in València, with a long chat in the botanical gardens of the Universitat de València. Juan Antonio was a very kind and unassuming person, open and friendly, with a continued flow of research in Objective Bayes methodology and in particular on integral priors. Hasta luego, Juan Antonio!

accelerating HMC by learning the leapfrog scale

Posted in Books, Statistics on October 12, 2018 by xi'an

In this new arXiv submission that was part of Changye Wu's thesis [defended last week], we try to reduce the high sensitivity of the HMC algorithm to its hand-tuned parameters, namely the step size ε of the discretisation scheme, the number of steps L of the integrator, and the covariance matrix of the auxiliary variables, by calibrating the number of steps of the leapfrog integrator towards avoiding both slow-mixing chains and wasteful computation costs. We do so by learning from the No-U-Turn Sampler (NUTS) of Hoffman and Gelman (2014), which already automatically tunes both the step size and the number of leapfrogs.

The core idea behind NUTS is to pick the step size via primal-dual averaging in a burn-in (warmup, Andrew would say) phase and to build at each iteration a proposal based on following a locally longest path on a level set of the Hamiltonian. This is achieved by a recursive algorithm that, at each call to the leapfrog integrator, requires evaluating both the gradient of the target distribution and the Hamiltonian itself. Roughly speaking, an iteration of NUTS costs twice as much as regular HMC with the same number of calls to the integrator. Our approach is to learn from NUTS the scale of the leapfrog length and to use the resulting empirical distribution of the longest leapfrog paths to randomly pick the value of L at each iteration of an HMC scheme. This obviously preserves the validity of the HMC algorithm.
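
As a rough illustration of this second step, and not the code from the paper, one iteration of the resulting eHMC sampler could look as follows in R, with emp.L standing for the empirical distribution (a vector) of longest leapfrog lengths collected during the NUTS warm-up, and logpi and gradlogpi for the target log-density and its gradient, all three assumed given:

# minimal sketch of one eHMC iteration, not the paper's implementation
ehmc.step <- function(x, eps, emp.L, logpi, gradlogpi){
  L <- sample(emp.L, 1)        # leapfrog length drawn from the empirical
                               # distribution learned from NUTS
  p <- rnorm(length(x))        # momentum refreshment, identity mass matrix
  xn <- x; pn <- p
  for (l in 1:L){              # standard leapfrog integration
    pn <- pn + eps * gradlogpi(xn) / 2
    xn <- xn + eps * pn
    pn <- pn + eps * gradlogpi(xn) / 2
  }
  # usual Metropolis correction on the augmented target
  if (log(runif(1)) < logpi(xn) - logpi(x) - sum(pn^2)/2 + sum(p^2)/2)
    xn else x
}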

While a theoretical comparison of the convergence performances of NUTS and this eHMC proposal seems beyond our reach, we ran a series of experiments to evaluate these performances, using as a criterion an ESS value that is calibrated by the evaluation cost of the logarithm of the target density function and of its gradient, as this is usually the most costly part of the algorithms, as well as a similarly calibrated expected squared jumping distance. Above is one such illustration for a stochastic volatility model, the first axis representing the targeted acceptance probability in the Metropolis step. Some of the gains in either ESS or ESJD are by a factor of ten, which relates to our argument that NUTS somewhat wastes computation effort by selecting its proposal uniformly over the candidate set, rather than close to its end-points, which automatically reduces the distance between the current position and the proposal.
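
For concreteness, the two calibrated criteria essentially amount to dividing by the number of gradient evaluations, as in the following sketch, where xs (an iterations × dimension matrix of simulated values) and ngrad (the total count of gradient evaluations) are hypothetical placeholders rather than names from the paper:

# illustrative computation of the cost-calibrated criteria mentioned above
library(coda)
min(effectiveSize(as.mcmc(xs))) / ngrad   # calibrated ESS (worst component)
mean(rowSums(diff(xs)^2)) / ngrad         # calibrated expected squared jump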

Le Monde puzzle [#1070]

Posted in Books, Kids, R, University life on October 11, 2018 by xi'an

Rewording the fifth competition problem of the Le Monde mathematical puzzle:

For the 3×3 tables below, what is the minimal number of steps to move from left to right when a yellow token can only be moved to an empty location surrounded by two other tokens?

In the 4×4 table below, there are 6 green tokens. How many steps from left to right?

Painful and moderately mathematical, once more… For the first question, a brute-force simulation of random valid moves of length less than 100 returns solutions in 4 steps:

     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
[1,]    1    1    1    0    0    1    0    0    1
[2,]    1    0    1    0    1    1    0    0    1
[3,]    0    0    1    1    1    1    0    0    1
[4,]    0    0    1    1    0    1    0    1    1
[5,]    0    0    1    0    0    1    1    1    1

But this is not an acceptable move because of the “other” constraint. Imposing this constraint leads to a solution in 9 steps, but is this the lowest bound?! It actually took me most of the weekend (apart from a long drive to and from a short half-marathon!) to figure out a better strategy than brute-force random exploration: the trick I eventually figured out is to start from the finishing (rightmost) value F of the grid and look at the values with solutions available in 1, 2, … steps (a rough sketch of this backward search is given at the end of this post). This is not exactly dynamic programming, but it keeps the running time under control if there is a solution associated with the starting (leftmost) value S. (Robin proceeded reverse-wise, which in retrospect is presumably equivalent, if faster!) The 3×3 grid has 9 choose 5, i.e. 126, possible configurations with 5 coins, which means the number of cases remains under control, and the same holds for the 4×4 grid with 6 coins and its 8008 configurations. This led to a 9-step solution for n=3 and the proposed starting grid in yellow:

[1] 1 1 1 0 0 1 0 0 1
[1] 1 1 0 0 1 1 0 0 1
[1] 1 1 0 1 1 0 0 0 1
[1] 0 1 0 1 1 1 0 0 1
[1] 0 1 1 1 0 1 0 0 1
[1] 1 1 1 1 0 0 0 0 1
[1] 0 1 1 1 1 0 0 0 1
[1] 0 0 1 1 1 0 0 1 1
[1] 0 0 1 1 0 0 1 1 1
[1] 0 0 1 0 0 1 1 1 1

and a 19 step solution for n=4:

[1] 1 0 0 0 0 1 0 0 0 1 1 1 0 0 0 1
[1] 1 0 0 0 0 1 0 0 0 0 1 1 0 0 1 1
[1] 1 0 0 0 0 1 1 0 0 0 1 1 0 0 1 0
[1] 1 1 0 0 0 1 1 0 0 0 0 1 0 0 1 0
[1] 1 1 0 0 0 0 1 1 0 0 0 1 0 0 1 0
[1] 1 1 1 0 0 0 1 1 0 0 0 0 0 0 1 0
[1] 1 0 1 1 0 0 1 1 0 0 0 0 0 0 1 0
[1] 1 1 1 1 0 0 1 0 0 0 0 0 0 0 1 0
[1] 1 1 0 1 0 1 1 0 0 0 0 0 0 0 1 0
[1] 1 0 0 1 1 1 1 0 0 0 0 0 0 0 1 0
[1] 0 0 0 1 1 1 1 0 0 0 1 0 0 0 1 0
[1] 0 0 0 1 1 1 0 0 0 1 1 0 0 0 1 0
[1] 0 0 0 1 1 1 0 0 1 1 0 0 0 0 1 0
[1] 0 0 0 1 0 1 0 0 1 1 0 0 0 1 1 0
[1] 0 0 0 1 0 1 0 0 1 0 0 0 1 1 1 0
[1] 0 0 0 1 1 1 0 0 1 0 0 0 1 0 1 0
[1] 0 0 0 1 0 1 0 0 1 1 0 0 1 0 1 0
[1] 0 0 0 1 0 1 0 0 0 1 1 0 1 0 1 0
[1] 0 0 0 1 0 1 1 0 0 1 1 0 1 0 0 0

The first resolution takes less than a minute and the second one just a few minutes (or less than my short morning run!). Surprisingly, using this approach does not require much more work than the brute-force one, which makes me wonder what solution the Le Monde journalists will propose. Given the (misguided) effort put into my resolution, it is no wonder this puzzle is worth a larger number of points!
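
For the record, here is a rough sketch of the backward search described above, not my actual resolution code: valid.moves() stands for a hypothetical function returning all configurations reachable in one (reversed) move from a given 0/1 configuration, and the search simply grows layers of configurations from the finishing grid until the starting one is met.

# minimal sketch of the backward (breadth-first) search, not the actual code
backward.search <- function(start, finish, valid.moves, maxit = 50){
  key <- function(x) paste(x, collapse = "")   # encode a configuration as a string
  layer <- list(finish)                        # current layer of configurations
  seen <- key(finish)
  for (steps in 1:maxit){
    nxt <- list()
    for (cfg in layer)
      for (m in valid.moves(cfg))
        if (!key(m) %in% seen){
          seen <- c(seen, key(m))
          nxt <- c(nxt, list(m))
        }
    if (key(start) %in% seen) return(steps)    # starting grid reached in `steps` moves
    layer <- nxt
  }
  NA                                           # no solution found within maxit moves
}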