Archive for the pictures Category

trick or treat?!

Posted in Kids, pictures, Travel, Wines on September 24, 2016 by xi'an

Two weeks ago, we went to a local restaurant, connected to my running grounds, for dinner. While the setting in a 16th century building that was part of the original Sceaux castle was quite nice, the fare was mediocre and the bill more suited to a one-star Michelin restaurant than to dishes I could have cooked myself. The height (or rather bottom) of the meal was a dish of sardines consisting of a half-open pilchard can… Just dumped on a plate with a slice of bread. It could have been a stroke of genius from the chef had the sardines been cooked and presented in the can, alas it sounded more like the act of an evil genie! Or more plainly a swindle. As those tasty sardines came straight from the shop!

Chateau Puech-Haut

Posted in pictures, Wines on September 20, 2016 by xi'an

snapshots from Nature

Posted in Books, Kids, pictures, University life on September 19, 2016 by xi'an

Among many interesting things I read from the pile of Nature issues that had accumulated over a month of travelling, with a warning that these are mostly “old” news by now:

  • the very special and untouched case of Cuba in terms of the Zika epidemics, thanks to a long term policy fighting mosquitoes at all levels of the society;
  • an impressive map of the human cortex, whose statistical analysis would be fascinating;
  • an excerpt from the Nature issue of 13 August 1966, where the Poisson distribution was said to describe the distribution of scores during the 1966 World Cup;
  • an analysis of a genetic experiment on evolution involving 50,000 generations (!) of Escherichia coli;
  • a look back at the great novel Flowers for Algernon, a novel I read eons ago;
  • a Nature paper on the first soft robot, or octobot, along with an easier introduction, which did not tell what kind of operations such a robot could accomplish;
  • a vignette on a Science paper about the interaction between honey hunters and hunting birds, which I also heard depicted on the French National Radio, with an experiment comparing the actual hunting (human) song, a basic sentence in the local language, and the imitation of the song of another bird. I could not understand why the experiment did not include hunting songs from other hunting groups, as they are highly different but just as effective. It would have helped in understanding how innate the reaction of the bird is;
  • another literary entry, on the science behind Mary Shelley’s Frankenstein;
  • a study of the Mathematical Genealogy Project in terms of the few mathematicians who started most genealogies of mathematicians, including d’Alembert, advisor to Laplace, of whom I am one of the many descendants. The finding is not that astounding when considering usual genealogies, where most branches die off, and the highly hierarchical structure of power in universities of old.

random walk on a torus [riddle]

Posted in Books, Kids, pictures on September 16, 2016 by xi'an

The Riddler of this week(-end) has a simple riddle to propose: given a random walk on the {1,2,…,N} torus with a ⅓ probability of death at each step, what is the probability that death occurs at the starting point?

The question is close to William Feller’s famous Chapter III on random walks, with his equally famous reflection principle. Conditioning on the time n of death, which as we all know is definitely absorbing (!), the event of interest is a passage at zero, or at any multiple of N (omitting the torus cancellation), at time n-1 (since death occurs the next instant). For a passage at zero, this cannot happen when n is even (since n-1 is then odd) and otherwise it is a Binomial event with probability

{n-1 \choose \frac{n-1}{2}} 2^{-(n-1)}

For a passage at kN, with k different from zero, kN+n must be odd and the probability is then

{n-1 \choose \frac{n-1+kN}{2}} 2^{-(n-1)}

which, multiplying by the probability 2^{n-1}/3^n that death occurs precisely at time n and summing over all n and k, leads to a global probability of

\sum_{n=1}^\infty \dfrac{2^{n-1}}{3^{n}} \sum_{k=-\lfloor (n-1)/N \rfloor}^{\lfloor (n-1)/N \rfloor} {n-1 \choose \frac{n-1+kN}{2}} 2^{-(n-1)}

i.e.

\sum_{n=1}^\infty \dfrac{1}{3^{n}} \sum_{k=-\lfloor (n-1)/N \rfloor}^{\lfloor (n-1)/N \rfloor} {n-1 \choose \frac{n-1+kN}{2}}
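As a sanity check (mine, not part of the original solution), the double sum is easy to evaluate by truncation, here in Python, with the binomial written for the n-1 moves made before death strikes at time n, and the bounds on k handled by a guard:

```python
from math import comb

def death_at_start(N, nmax=300):
    """Truncated version of the double sum: death at time n means n-1
    surviving moves ending at some multiple kN of the torus length,
    then a death event, with overall weight
    (2/3)^(n-1) * (1/3) * C(n-1, (n-1+kN)/2) * 2^-(n-1)
    = C(n-1, (n-1+kN)/2) / 3^n."""
    total = 0.0
    for n in range(1, nmax + 1):
        # k ranges over all multiples of N reachable in n-1 steps
        for k in range(-((n - 1) // N) - 1, (n - 1) // N + 2):
            num = n - 1 + k * N
            if num % 2 == 0 and 0 <= num // 2 <= n - 1:
                total += comb(n - 1, num // 2) / 3 ** n
    return total

print(death_at_start(3))  # ≈ 0.5
```

The truncation error is bounded by the probability of surviving nmax steps, (2/3)^nmax, hence negligible here.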

Since this formula is rather unwieldy, I looked for another approach during a métro ride [to downtown Paris, to enjoy a drink with Stephen Stigler]. An easier one is to allocate to each point i of the torus the probability p[i] of dying at position 0 when starting from i, and to solve the system of equations that is associated with it. For instance, when N=3, the system reduces by symmetry to

p_0=1/3+2/3 p_1, \quad p_1=1/3 p_0 + 1/3 p_1

which leads to a probability of ½ of dying at position 0 when leaving from 0. When letting N grow to infinity, the torus structure no longer matters and dying at position 0 means returning to position 0, which corresponds to the k=0 terms of the above combinatorial formula, namely

\sum_{m=0}^\infty \dfrac{1}{3^{2m+1}}  {2m \choose m}

which happens to be equal to

\dfrac{1}{3}\,\dfrac{1}{\sqrt{1-4/9}}=\dfrac{1}{\sqrt{5}}\approx 0.4472

as can be [unnecessarily] checked by a direct R simulation. This √5 is actually the most surprising part of the exercise!
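The direct simulation alluded to above can be replicated in a few lines (in Python rather than R, and under my reading of the riddle that at each instant the walker dies with probability ⅓ or moves one step left or right with probability ⅓ each):

```python
import random

def simulate(N, reps=200_000, seed=1):
    """Monte Carlo estimate of the probability of dying at the starting
    point 0 for the random walk on the {0,...,N-1} torus, where at each
    instant the walker dies w.p. 1/3 or moves ±1 w.p. 1/3 each."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        pos = 0
        while True:
            u = rng.random()
            if u < 1 / 3:                      # death strikes
                hits += (pos == 0)
                break
            pos = (pos + (1 if u < 2 / 3 else -1)) % N
    return hits / reps

print(simulate(3))     # about 1/2
print(simulate(1001))  # about 1/sqrt(5) ≈ 0.447
```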

local kernel reduction for ABC

Posted in Books, pictures, Statistics, University life on September 14, 2016 by xi'an

“…construction of low dimensional summary statistics can be performed as in a black box…”

Today Zhou and Fukumizu just arXived a paper that proposes a gradient-based dimension reduction for ABC summary statistics, in the spirit of RKHS kernels as advocated, e.g., by Arthur Gretton. Here the projection is a mere linear projection Bs of the vector of summary statistics, s, where B is an estimated Hessian matrix associated with the posterior expectation E[θ|s]. (There is some connection with the latest version of Li’s and Fearnhead’s paper on ABC convergence, as they also define a linear projection of the summary statistics, based on asymptotic arguments, although their matrix does depend on the true value of the parameter.) The linearity sounds like a strong restriction [to me], especially when the summary statistics have no reason to belong to a vector space and thus to be open to changes of bases and linear projections. For instance, a specific value taken by a summary statistic, like 0 say, may be more relevant than the range of its values. On a larger scale, I am doubtful about always projecting a vector of summary statistics onto a subspace with the smallest possible dimension, i.e., the dimension of θ. In practical settings, it seems impossible to derive the optimal projection, and a subvector is almost certain to lose information against a larger vector.
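For readers unfamiliar with the idea, here is a toy sketch of such a linear summary reduction, in the spirit of Fearnhead and Prangle’s semi-automatic ABC: a least-squares fit of E[θ|s], rather than the kernel-based Hessian construction of the paper (the toy model and all numerical choices are mine, not the authors’):

```python
import numpy as np

rng = np.random.default_rng(0)

def summaries(theta, n=50):
    """Toy model: n observations N(theta, 1), summarised by a
    deliberately redundant 4-dimensional statistic."""
    x = rng.normal(theta, 1.0, size=n)
    return np.array([x.mean(), np.median(x), x[: n // 2].mean(), x.var()])

# pilot simulations from the prior theta ~ N(0, 2^2)
thetas = rng.normal(0.0, 2.0, size=2000)
S = np.vstack([summaries(t) for t in thetas])

# least-squares fit of E[theta|s] ~ b0 + Bs: the fitted row B is a
# linear projection of s with the dimension of theta (here, one)
X = np.hstack([np.ones((len(S), 1)), S])
coef, *_ = np.linalg.lstsq(X, thetas, rcond=None)
b0, B = coef[0], coef[1:]
print(B)  # near-zero weight on the uninformative variance component
```

The projected statistic b0 + Bs then replaces s in the ABC distance; the paper derives B instead from a kernel estimate of the Hessian of E[θ|s].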

“Another proposal is to use different summary statistics for different parameters.”

Which is exactly what we did in our random forest estimation paper, using a different forest for each parameter of interest (but no real tree was damaged in the experiment!).

Savage-Dickey supermodels

Posted in Books, Mountains, pictures, Statistics, Travel, University life on September 13, 2016 by xi'an

Combination picture shows Aymara indigenous women (L-R) Domitila Alana, 42, Bertha Vedia, 48, Lidia Huayllas, 48, and Dora Magueno, 50, posing for a photograph at the Huayna Potosi mountain, Bolivia, April 6, 2016. (c.) REUTERS/David Mercado

A. Mootoovaloo, B. Bassett, and M. Kunz just arXived a paper on the computation of Bayes factors by the Savage-Dickey representation through a supermodel (or encompassing model). (I wonder why Savage-Dickey is so popular in astronomy and cosmology statistical papers and not so much elsewhere.) Recall that the trick is to write the Bayes factor in favour of the embedded null model as the ratio of the posterior and of the prior densities of the tested parameter (thus eliminating nuisance or common parameters), both evaluated at its null value,

B01=π(φ⁰|x)/π(φ⁰).

This holds modulo some continuity constraints on the prior density, and under the assumption that the conditional prior on the nuisance parameters is the same under the null and the encompassing models [given the null value φ⁰]. If this sounds confusing or even shocking from a mathematical perspective, check the numerous previous entries on this topic on the ‘Og!
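As a quick illustration (a toy Gaussian example of mine, not from the paper), the ratio of posterior to prior densities at the null value can be checked against the direct evidence ratio, both being available in closed form when x ~ N(φ,1) with φ ~ N(0,τ²) under the encompassing model:

```python
from math import exp, pi, sqrt

def npdf(y, mu, var):
    """Density of the N(mu, var) distribution at y."""
    return exp(-(y - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

x, tau2 = 1.7, 4.0   # one observation and prior variance, both arbitrary

# direct evidence ratio: null model x ~ N(0,1) against the
# encompassing model x ~ N(0, 1 + tau2), integrating out phi
direct = npdf(x, 0.0, 1.0) / npdf(x, 0.0, 1.0 + tau2)

# Savage-Dickey: posterior density over prior density of phi at 0,
# with conjugate posterior phi|x ~ N(x tau2/(1+tau2), tau2/(1+tau2))
post_mu = x * tau2 / (1.0 + tau2)
post_var = tau2 / (1.0 + tau2)
savage_dickey = npdf(0.0, post_mu, post_var) / npdf(0.0, 0.0, tau2)

print(direct, savage_dickey)  # the two ratios coincide
```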

The supermodel created by the authors is a mixture of the original models, as in our paper, and… hold the presses!, it is a mixture of the likelihood functions, as in Phil O’Neill’s and Theodore Kypraios’ paper. Which is not mentioned in the current paper and obviously should be. In the current representation, the posterior distribution on the mixture weight α is a linear function of α involving both evidences, α(m¹-m²)+m², times the artificial prior on α. The resulting estimator of the Bayes factor thus shares features with bridge sampling, reversible jump, and the importance sampling version of nested sampling we developed in our Biometrika paper, in addition to O’Neill and Kypraios’ solution.
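Under a uniform prior on α, this linearity implies that the Bayes factor can be read off the posterior density of α at its endpoints, since π(1|x)/π(0|x)=m¹/m². A toy check with two Gaussian evidences (my own illustration, not from the paper):

```python
from math import exp, pi, sqrt

def npdf(y, mu, var):
    """Density of the N(mu, var) distribution at y."""
    return exp(-(y - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

x = 0.3                   # a single observation (arbitrary value)
m1 = npdf(x, 0.0, 1.0)    # evidence of model 1: x ~ N(0, 1)
m2 = npdf(x, 0.0, 5.0)    # evidence of model 2: x ~ N(0, 5)

# posterior of the mixture weight under a uniform prior on [0,1]:
# pi(alpha|x) = (alpha*(m1-m2) + m2) / ((m1+m2)/2), linear in alpha
def post(a):
    return (a * (m1 - m2) + m2) / ((m1 + m2) / 2)

print(post(1.0) / post(0.0), m1 / m2)  # both equal the Bayes factor B12
```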

The following quote is inaccurate, since in realistic settings the MCMC algorithm needs to simulate the parameters of the compared models, hence to represent the multidimensional integrals by Monte Carlo versions.

“Though we have a clever way of avoiding multidimensional integrals to calculate the Bayesian Evidence, this new method requires very efficient sampling and for a small number of dimensions is not faster than individual nested sampling runs.”

I actually wonder at the sheer rationale for running an intensive MCMC sampler in such a setting, when the weight α is completely artificial. It is only used to jump from one model to the next, which sounds quite inefficient when compared with simulating from both models separately and independently. This approach can also be seen as a special case of Carlin’s and Chib’s (1995) alternative to reversible jump. Using instead the Savage-Dickey representation is of course infeasible. Which makes the overall reference to this method rather inappropriate in my opinion. Further, the examples processed in the paper all involve (natural) embedded models where the original Savage-Dickey approach applies. Creating an additional model to apply a pseudo-Savage-Dickey representation does not sound very compelling…

Incidentally, the paper also includes a discussion of a weird notion, the likelihood of the Bayes factor, B¹², which is plotted as a distribution in B¹², most strangely. The only other place I met this notion is in Murray Aitkin’s book. Something’s unclear there or in my head!

“One of the fundamental choices when using the supermodel approach is how to deal with common parameters to the two models.”

This is an interesting question, although maybe not so relevant for the Bayes factor issue where it should not matter. However, as in our paper, multiplying the number of parameters in the encompassing model may hinder convergence of the MCMC chain or reduce the precision of the approximation of the Bayes factor. Again, from a Bayes factor perspective, this does not matter [while it does in our perspective].

Darwin’s radio [book review]

Posted in Books, Kids, pictures, University life on September 10, 2016 by xi'an

When in Sacramento two weeks ago, I came across the Beers Books Center bookstore, with a large collection of used and (nearly) new cheap books, and among other books I bought Greg Bear’s Darwin’s Radio. I had (rather) enjoyed another book of his, Hull Zero Three, not to mention one of his first books, Blood Music, which I read in the mid-1980’s, and the premise of this novel sounded promising, not to mention the Nebula award. The theme is of a major biological threat, apparently due to a new virus, and of the scientific unravelling of what the threat really means. (Spoiler alert!) In that respect it sounds rather similar to the (great) Crichton‘s The Andromeda Strain, which is actually mentioned by some characters in this book. As is Ebola, as a sort of counterpoint (since Ebola is a deadly virus, although the epidemic in Western Africa now seems to have vanished). The biological concept exploited here is dormant DNA in non-coding parts of the genome that periodically gets awakened and induces massive steps in evolution. So massive that carriers of those mutations are killed by locals. Until the day it happens in an all-connected World and the mutation can no longer be stopped. The concept is compelling if not completely convincing of course, while the outcome of a new human race, which is to Homo Sapiens what Homo Sapiens was to Neanderthal, is rather disappointing. (How could it be otherwise?!) But I did appreciate the postulate of a massive and immediate change in the genome, even though the details were disputable and the dismissal of Dawkins‘ perspective poorly defended. Stylistically, the writing is at times heavy, and there are too many chance occurrences, like the main character happening to be in Georgia for a business deal (spoilers, spoilers!) at the time of the opening of collective graves, or the second main character coming upon a couple of Neanderthal mummies with a Sapiens baby, or yet this pair of main characters falling in love and delivering a live mutant baby girl. But I enjoyed reading it between San Francisco and Melbourne, with a few hours of lost sleep and work. It is a page turner, no doubt! I also liked the political undercurrents, from riots to emergency measures, to an effective dictatorship controlling pregnancies and detaining newborns and their mothers.

One important thread in the book deals with anthropology digs running up against Native claims to the corpses and a general opposition to such digs. This reminded me of a very recent article in Nature where a local Indian tribe had claimed rights to several-thousand-year-old skeletons, whose DNA was then shown to be more related to far-away groups than to the claimants’. But the tribe was still granted the last word, in a rather worrying jurisprudence.