Archive for the Books Category

Le Monde puzzle [#1024]

Posted in Books, Kids on October 10, 2017 by xi'an

The penultimate and appropriately somewhat Monty Hallesque Le Monde mathematical puzzle of the competition!

A dresser with 5×5 drawers contains a single object in one of the 25 drawers. A player opens a drawer at random and, after each choice, the object moves at random to a drawer adjacent to its current location, while the drawers chosen by the player remain open. What is the maximum number of drawers one needs to open to find the object?

In a dresser with 9 drawers in a line, containing again a single object, the player opens drawers one at a time, after which the opened drawer is closed again and the object moves to one of the drawers adjacent to its current location. What is the maximum number of drawers one needs to open to find the object?

For the first question, setting a pattern of exploration and, given this pattern, simulating a random walk that tries to avoid the said pattern as long as possible is feasible, returning a maximum number of steps over many random walks [and hence a lower bound on the true maximum], as in the following code:

sefavyd=function(pater=seq(1,49,2)%%25+1){
  # adversarial walk on the 5x5 dresser (column-major indexing):
  # the object dodges the opening pattern 'pater' as long as it can
  fild=matrix(0,5,5)            # record of open drawers
  m=pater[1];i=fild[m]=1        # open the first drawer
  t=sample((1:25)[-m],1)        # the object starts anywhere else
  nomove=FALSE
  while (!nomove){
   i=i+1
   m=pater[i];fild[m]=1         # open the next drawer in the pattern
   if (t==m){ nomove=TRUE }else{
   muv=NULL                     # admissible moves: adjacent drawers
   if ((t-1)%%5>0) muv=c(muv,t-1)     # up, unless on the top row
   if (t%%5>0) muv=c(muv,t+1)         # down, unless on the bottom row
   if ((t-1)%/%5>0) muv=c(muv,t-5)    # left, unless in the first column
   if ((t-1)%/%5<4) muv=c(muv,t+5)    # right, unless in the last column
   muv=muv[fild[muv]==0]        # ...restricted to still-closed drawers
   nomove=(length(muv)==0)      # cornered: the object is caught
   if (!nomove) t=sample(rep(muv,2),1)}  # rep() guards against length-one muv
  }
  return(i)}
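A hedged usage sketch [mine, not in the original post]: replicating the adversarial walk and keeping the largest exit time returns the simulated lower bound mentioned above.

# simulated lower bound on the worst-case number of openings
max(replicate(1e4,sefavyd()))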

But a direct reasoning starts from the observation that, as long as two adjacent drawers remain unopened, the random walk can, with non-zero probability, switch indefinitely between both drawers. Hence, a sure recovery of the object requires opening one drawer out of two, that is, the minority colour of a chequerboard colouring of the grid. The minimal number of drawers to open on a 5×5 dresser is thus 2+3+2+3+2=12. Since in 12 steps those drawers are all open, and the object must then move from the other colour into an open drawer, spotting the object may require up to 13 steps.
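A hedged illustration of this count [my own sketch, using the same column-major indexing as in the above code]:

# the 12 drawers to open are the minority colour of a chequerboard
# colouring of the 5x5 grid, i.e., 2+3+2+3+2 cells
fild=matrix(1:25,5,5)
sum((row(fild)+col(fild))%%2==1)  # 12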

For the second case, unless I [again!] misread the question, whatever pattern one picks for the exploration, there is always a non-zero probability to avoid discovery after an arbitrary number of steps. The [wrong!] answer is thus infinity. To cross-check this reasoning, I wrote the following R code that mimics a random pattern of exploration, associated with an opportunistic random walk that avoids discovery whenever possible (even with very low probability) by pushing the object towards the centre:

drawl=function(){
  # the object starts in the central drawer of the row of 9
  i=1;t=5;nomove=FALSE
  m=sample((1:9)[-t],1)       # first drawer opened, missing the object
  while (!nomove){
    nextm=sample((1:9),1)     # next drawer opened, chosen at random
    muv=c(t-1,t+1)            # the object must move to a neighbour...
    muv=muv[(muv>0)&(muv<10)&(muv!=nextm)]  # ...dodging the next opening
    nomove=(length(muv)==0)||(i>1e6)  # caught, or walk capped at 1e6 steps
    if (!nomove) t=sample(rep(muv,2),1,
              prob=1/(5.5-rep(muv,2))^4)  # bias the move towards the centre
    i=i+1}
  return(i)}

which returns unlimited values on repeated runs [i.e., hits the above cap]. However, I was wrong and my R code unable to dismiss my a priori, as later discussions with Robin and Julien at Paris-Dauphine exhibited ways of terminating the random walk in 18, then 15, then 14 steps! The idea is to push the target towards one of the endpoints, where it has no option but to turn back: an opening pattern like 2, 3, 4, 5, 6, 7, 8, 8 takes care of a hidden object starting in an even drawer, while the subsequent openings 7, 6, 5, 4, 3, 2 terminate any random path starting from an odd drawer. To double check:

grawl=function(){
  # deterministic opening pattern 3,...,8,8,7,...,1, checked against
  # an adversarial walk from every possible starting drawer t
  len=0;muvz=c(3:8,8:1)
  for (t in 1:9){
    i=1;m=muvz[i];nomove=(t==m)
    while (!nomove){
     i=i+1;m=muvz[i];muv=c(t-1,t+1)      # neighbours of the object...
     muv=muv[(muv>0)&(muv<10)&(muv!=m)]  # ...dodging the current opening
     nomove=(length(muv)==0)             # cornered: the object is caught
     if (!nomove)
      t=sample(rep(muv,2),1)}
    len=max(len,i)}                      # worst case over starting drawers
  return(len)}

produces the value 14.

mea culpa!

Posted in Books, Kids, R, Statistics, University life on October 9, 2017 by xi'an

An entry about our Bayesian Essentials book on X validated alerted me to a typo in the derivation of the Gaussian posterior! When deriving this posterior (which was left as an exercise in Bayesian Core), I just forgot the term expressing the divergence between the prior mean and the sample mean. Mea culpa!!!
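For the record, a hedged sketch of the conjugate computation, in generic notation that need not match the book's: with x₁,…,xₙ iid N(θ,σ²) and prior θ∼N(μ,τ²), completing the square in θ yields

$$\frac{n}{\sigma^2}(\bar{x}-\theta)^2+\frac{1}{\tau^2}(\theta-\mu)^2
=\Big(\frac{n}{\sigma^2}+\frac{1}{\tau^2}\Big)\Big(\theta-\frac{n\bar{x}/\sigma^2+\mu/\tau^2}{n/\sigma^2+1/\tau^2}\Big)^2
+\frac{n}{\sigma^2+n\tau^2}(\bar{x}-\mu)^2$$

where the last term, free of θ and hence irrelevant for the posterior itself, is precisely the divergence between sample and prior means that must not be dropped from the marginal likelihood.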

Children of Time [book review]

Posted in Books, pictures, Travel on October 8, 2017 by xi'an

I came by this book in the common room of the mathematics department of the University of Warwick, which I visit regularly during my stays there, for it enjoys a book-sharing box where I leave the books I’ve read (and do not want to carry back to Paris) and where I check for potential catches… One of these books was Tchaikovsky’s Children of Time, a great space-opera novel à la Arthur C Clarke, which got the 2016 Arthur C Clarke award, deservedly so (even though I very much enjoyed The Long Way to a Small Angry Planet, Tchaikovsky’s book is much more of an epic cliffhanger where the survival of an entire race is at stake). The children of time are indeed the last remnants of the human race, surviving in an artificial sleep aboard an ancient spaceship that irremediably deteriorates. Until there is no solution but landing on a terraformed planet created eons ago. And defended by an AI spawned (or spammed) by the scientist in charge of the terra-formation, who created a virus that speeds up evolution, with unintended consequences. Given that the strength of the book relies on these consequences, I cannot get into much detail about the alternative pathway to technology (incl. artificial intelligence) followed by the inhabitants of this new world, and even less about the conclusive chapters that make up for a rather slow progression towards this final confrontation. An admirable and deep book I will most likely bring back to the common room on my next trip to Warwick! (As an aside, I wonder if the title was chosen in connection with Goya’s picture of Chronus [Time] devouring his children…)

the nihilist girl [book review]

Posted in Books, Kids on October 7, 2017 by xi'an

When stopping by an enticing bookstore on Rue Saint-Jacques, in front of La Sorbonne, last July, I came across a book by the mathematician Sofia Kovalevskaya called The Nihilist Girl. Having never heard of non-mathematical books written by this Russian mathematician, whose poster stood in my high-school classroom, I bought it (along with other summer reads). And then discovered that, besides being a woman of many “firsts”, from getting a PhD at Heidelberg (under Weierstraß) to getting a professor position in Stockholm, to being nominated to a Chair in the Russian Academy of Sciences, she also took an active part in the Commune de Paris, along with many emigrated Russian revolutionaries (or nihilists). Which explains this book about a nihilist girl leaving everything to follow a revolutionary deported to Siberia. While not autobiographical (Sweden is not Siberia!), the novel contains many aspects inspired by the (amazing if desperately short) life of Sofia Kovalevskaya herself. A most interesting coincidence is that Sofia’s sister, Anna, was engaged for a while to Fyodor Dostoyevsky, whose novel The Demons takes the opposite view on nihilists. (As a feminist and anarchist, Anna took a significant part in the Commune de Paris, to the point of having to flee to Switzerland to escape deportation to New Caledonia, while her husband was sentenced to death.) The book itself is not particularly enjoyable, being quite naïve in its plot and construction. It is nonetheless a great testimony of the situation of Russia in the 19th Century and of the move of the upper-class liberals towards revolutionary ideals, while the exploited peasant class they wanted to free showed no inclination to join them. I think Dostoyevsky expresses much more clearly this most ambiguous posturing of the cultivated classes at the time, yearning for more freedom and fairness for all, but fearing the Tsarist police, unable to connect with the peasantry, and above all getting a living from revenues produced by their farmlands.

Barker at the Bernoulli factory

Posted in Books, Statistics on October 5, 2017 by xi'an

Yesterday, Flavio Gonçalves, Krzysztof Łatuszyński, and Gareth Roberts (Warwick) arXived a paper on Barker’s algorithm for Bayesian inference with intractable likelihoods.

“…roughly speaking Barker’s method is at worst half as good as Metropolis-Hastings.”

Barker’s acceptance probability (1965) is a smooth if less efficient version of Metropolis-Hastings. (Barker wrote his thesis in Adelaide, in the Mathematical Physics department. Most likely, he never interacted with Ronald Fisher, who died there in 1962.) This smoothness is exploited by devising a Bernoulli factory consisting of a 2-coin algorithm that manages to simulate the Bernoulli variable associated with the Barker probability, from a coin that can simulate Bernoullis with probabilities proportional to [bounded] π(θ). For instance, using a bounded unbiased estimator of the target. And another coin that simulates another Bernoulli on a remainder term. Assuming the bound on the estimate of π(θ) is known [or part of the remainder term]. This is a neat result in that it expands the range of pseudo-marginal methods (and resuscitates Barker’s formula from oblivion!). The paper includes an illustration in the case of the far-from-toyish Wright-Fisher diffusion. [Making Fisher and Barker meet, in the end!]
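As a hedged sketch of the generic two-coin idea [my own notation and a simplification, not the paper’s exact algorithm]: to simulate a Bernoulli with probability p₁/(p₁+p₂), as in Barker’s acceptance step, from coins with success probabilities p₁/c₁ and p₂/c₂ and known constants c₁ and c₂, one may loop as follows,

twocoin=function(coin1,coin2,c1,c2){
  # coin1() and coin2() are user-supplied 0/1 generators with success
  # probabilities p1/c1 and p2/c2; the loop exits with value 1 with
  # probability p1/(p1+p2)
  repeat{
    if (runif(1)<c1/(c1+c2)){
      if (coin1()==1) return(1)   # e.g., accept the Barker move
    }else{
      if (coin2()==1) return(0)}  # e.g., stay at the current value
  }}

and a quick sanity check with toy coins, p₁=.3, p₂=.2, c₁=c₂=1, for which the frequency of 1’s should be near .6:

mean(replicate(1e4,twocoin(function() runif(1)<.3,function() runif(1)<.2,1,1)))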

Bayesian spectacles

Posted in Books, pictures, Statistics, University life on October 4, 2017 by xi'an

E.J. Wagenmakers and his enthusiastic team of collaborators at the University of Amsterdam and in the JASP software design team have started a blog called Bayesian spectacles, which I find a fantastic title. And not only because I wear glasses. Plus, they got their own illustrator, Viktor Beekman, which sounds like the epitome of sophistication! (Compared with resorting to vacation or cat pictures…)

In a most recent post they addressed the criticisms we made of the 72-author paper on p-values, one of the co-authors being E.J.! Andrew already re-addressed some of the address, but here is a disagreement he left me to chew on my own [and where the Abandoners are us!]:

Disagreement 2. The Abandoners critique the UMPBTs –the uniformly most powerful Bayesian tests– that feature in the original paper. This is their right (see also the discussion of the 2013 Valen Johnson PNAS paper), but they ignore the fact that the original paper presented a series of other procedures that all point to the same conclusion: p-just-below-.05 results are evidentially weak. For instance, a cartoon on the JASP blog explains the Vovk-Sellke bound. A similar result is obtained using the upper bounds discussed in Berger & Sellke (1987) and Edwards, Lindman, & Savage (1963). We suspect that the Abandoners’ dislike of Bayes factors (and perhaps their upper bounds) is driven by a disdain for the point-null hypothesis. That is understandable, but the two critiques should not be mixed up. The first question is: Given that we wish to test a point-null hypothesis, do the Bayes factor upper bounds demonstrate that the evidence is weak for p-just-below-.05 results? We believe they do, and in this series of blog posts we have provided concrete demonstrations.
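For concreteness, a hedged evaluation of the Vovk-Sellke bound mentioned in the quote [function name mine]: for p<1/e, the Bayes factor against the point null is bounded by 1/(-ep log p), which a one-liner evaluates at p=.05,

vovksellke=function(p) 1/(-exp(1)*p*log(p))  # valid for p < 1/e
vovksellke(.05)  # about 2.46, i.e., odds of at most 2.5 to 1 against H0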

Obviously, this reply calls for an examination of the entire BS blog series, but, being short of time at the moment, let me point out that the lower bounds on the Bayes factors showing much more support for H⁰ than a p-value at 0.05 only occur in special circumstances, even though I spend some time in my book discussing those bounds. Indeed, the [interesting] fact that the lower bounds are larger than the p-values does not hold in full generality: moving to a two-dimensional normal with potentially zero mean is enough to see the order between lower bound and p-value reverse, as I found [quite] a while ago when trying to expand Berger and Sellke (1987, the same year I was visiting Purdue, where both had a position). I am not sure this feature has been much explored in the literature; I did not pursue it once I realised the gap vanished in larger dimensions… I must also point out that I do not share Andrew’s repulsion for point nulls! While considering whether a parameter, say a mean, is exactly zero [or three or whatever] sounds rather absurd when faced with the strata of uncertainty about models, data, procedures, &tc. [even in theoretical physics!], comparing several [and all wrong!] models with or without some parameters for later use still makes sense. And my reluctance in using Bayes factors does not stem from an opposition to comparing models or from the procedure itself, which is quite appealing within a Bayesian framework [thus appealing per se!], but rather from the unfortunate impact of the prior [and its tail behaviour] on the quantity, and from the delicate calibration of the thing. And from a lack of reference solution [to avoid the O and the N words!]. As exposed in the demise papers. (The main version of which remains in a publishing limbo, the onslaught from the referees proving just too much for me!)

Jayanta Kumar Ghosh [1937-2017]

Posted in Books, pictures, Statistics, Travel, University life on October 2, 2017 by xi'an

Just heard from Sonia and Judith that our friend and fellow Bayesian Jayanta K Ghosh (জয়ন্ত কুমার ঘোষ in Bengali) passed away a few days ago in Lafayette. He was a wonderful man, very kind to everyone and open to discussing all aspects of Bayesian theory and methodology. While he worked on many branches of statistics, he is best known to Bayesians for his contributions to Bayesian asymptotics, from Bernstein-von Mises convergence theorems to the frequentist validation of non-informative priors, to the Bayesian analysis of infinite-dimensional problems, including consistency of posteriors and rates of convergence, and to Bayesian and empirical Bayes model selection rules in high-dimensional problems. He also wrote an introductory textbook on Bayesian statistics ten years ago with Mohan Delampady and Tapas Samanta, and a monograph on higher-order asymptotics. I knew from this summer that J K was quite sick and am quite sad to learn of his demise. He will be missed by all for his gentleness, and by Bayesians for his contributions to the fields of objective and non-parametric Bayesian statistics…