Archive for Université Paris Dauphine

resampling methods

Posted in Books, pictures, Running, Statistics, Travel, University life on December 6, 2017 by xi'an

A paper that was arXived [and that I missed!] last summer is a work on resampling by Mathieu Gerber, Nicolas Chopin (CREST), and Nick Whiteley. Resampling is used to sample from a weighted empirical distribution and to correct for very small weights in a weighted sample that otherwise lead to degeneracy in sequential Monte Carlo (SMC). Since this step is based on random draws, it induces noise (while improving the estimation of the target); reducing this noise is thus preferable, hence the appeal of replacing plain multinomial sampling with more advanced schemes. The initial motivation is for sequential Monte Carlo where resampling is rife and seemingly compulsory, but this also applies to importance sampling when considering several schemes at once. I remember discussing alternative schemes with Nicolas, who was then completing his PhD, as well as with Olivier Cappé, Randal Douc, and Eric Moulines at the time (circa 2004) we were working on the Hidden Markov book. And only getting then a somewhat vague idea as to why systematic resampling failed to converge.

In this paper, Mathieu, Nicolas and Nick show that stratified sampling (where a uniform is generated on every interval of length 1/n) enjoys some form of consistency, while systematic sampling (where the “same” uniform is generated on every interval of length 1/n) does not necessarily enjoy this consistency; there actually exist cases where convergence does not occur. However, a residual version of systematic sampling (where systematic sampling is applied to the residual decimal parts of the n-enlarged weights) is itself consistent.
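For the record, here is a minimal R sketch [mine, not the authors'] of the three schemes, for a vector w of normalised weights and a resample of the same size; findInterval() inverts the cumulative weight function, and the only difference between the stratified and systematic versions is whether n uniforms or a single one get drawn:

multi_rs=function(w,n=length(w)) sample(length(w),n,replace=TRUE,prob=w)
strat_rs=function(w,n=length(w)) findInterval((1:n-runif(n))/n,cumsum(w))+1
syst_rs=function(w,n=length(w)) findInterval((1:n-runif(1))/n,cumsum(w))+1

w=runif(20); w=w/sum(w)  # toy weights
table(syst_rs(w))        # indices and multiplicities of the resampled particles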

The paper also studies the surprising feature uncovered by Kitagawa (1996) that stratified sampling applied to an ordered sample brings an error of O(1/n²) between the cdfs rather than the usual O(1/n). It took me a while to even understand the distinction between the original and the ordered version (maybe because Nicolas used the empirical cdf during his SAD (Stochastic Algorithm Day!) talk, an ecdf that is the same for the ordered and initial samples). And both systematic and deterministic sampling become consistent in this case. The result was shown in dimension one by Kitagawa (1996) but extends to larger dimensions via the magical trick of the Hilbert curve.
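In dimension one, the ordered version simply amounts to sorting the particles by value before resampling [a sketch of mine, with x and w standing for hypothetical particle values and normalised weights, and strat_rs the stratified function sketched above]:

ord=order(x)                   # sort the particles by value, the 1-d
res=x[ord][strat_rs(w[ord])]   # counterpart of the Hilbert curve ordering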

journée algorithmes stochastiques à Dauphine vendredi

Posted in Statistics, University life on November 28, 2017 by xi'an

A final reminder (?) that we are holding a special day of five talks around stochastic algorithms at Dauphine this Friday, from 10:00 till 17:30. Attendance is free, and so are coffee and tea (while they last!), so come and join us!

O’Bayes in action

Posted in Books, Kids, Statistics, University life on November 7, 2017 by xi'an

My next-door colleague [at Dauphine] François Simenhaus shared a paradox [to be developed in an incoming test!] with Julien Stoehr and me last week, namely that, when selecting the largest number between a [observed] and b [unobserved], drawing a random boundary on a [meaning that a is chosen iff a is larger than this boundary] increases the probability of picking the largest number above ½…

When thinking about it in the wretched RER train [train that got immobilised for at least two hours just a few minutes after I went through!, good luck to the passengers travelling to the airport…] to De Gaulle airport, I lost the argument: if a<b, the probability [for this random bound] to be larger than a and hence for selecting b is 1-Φ(a), while, if a>b, the probability [of winning] is Φ(a). Hence the only case when the probability is ½ is when a is the median of this random variable. But, when discussing the issue further with Julien, I exposed an interesting non-informative prior characterisation. Namely, if I assume a,b to be iid U(0,M) and set an improper prior 1/M on M, the conditional probability that b>a given a is ½. Furthermore, the posterior probability to pick the right [largest] number with François’s randomised rule is also ½, no matter what the distribution of the random boundary is. Now, the most surprising feature of this coffee room derivation is that these properties only hold for the prior 1/M. Any other power of M will induce an asymmetry between a and b. (The same properties hold when a,b are iid Exp(M).)  Of course, this is not absolutely unexpected since 1/M is the invariant prior and since the “intuitive” symmetry only holds under this prior. Power to O’Bayes!
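A quick numerical check of the first property [my own two-liner, for an arbitrary value of a; the constant in the improper prior cancels in the ratio]: under the 1/M prior, the posterior on M given a is proportional to 1/M² on (a,∞), while P(b>a|M)=1-a/M, hence

a=3.14                                            # arbitrary observed value
num=integrate(function(M) (1-a/M)/M^2, a, Inf)$value
den=integrate(function(M) 1/M^2, a, Inf)$value
num/den                                           # returns 1/2 whatever a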

When discussing again the matter with François yesterday, I realised I had changed his wording of the puzzle. The original setting is one with two cards hiding the unknown numbers a and b and a player picking one of the cards. If the player picks a card at random, there is indeed a probability of ½ of picking the largest number. If the decision to switch or not depends on an independent random draw being larger or smaller than the number on the observed card, the probability of ending up with max(a,b) hits 1 when this random draw falls between a and b and remains ½ otherwise. Randomisation pays.
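A quick simulation check [mine, with uniform card values and a uniform random draw, both arbitrary choices]: when a, b, and the draw are iid U(0,1), the draw falls between a and b with probability ⅓, so the probability of ending with the largest card should be ½+½×⅓=⅔,

N=1e6
a=runif(N); b=runif(N); xi=runif(N)  # card values and random draw
keep=ifelse(a>xi, a, b)              # keep a iff it exceeds the draw
mean(keep==pmax(a,b))                # close to 2/3, above the 1/2 of a blind pick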

unusual view of my office [jatp]

Posted in pictures, Travel on October 18, 2017 by xi'an

what is your favorite teacher?

Posted in Kids, Statistics, University life on October 14, 2017 by xi'an

When Jean-Louis Foulley pointed out to me this page in the September issue of Amstat News, about nominating a favourite teacher, I told him it had to be a homonym statistician! Or a practical joke! After enquiry, it dawned on me that this completely undeserved inclusion came from a former student in my undergraduate Estimation course, who was very enthusiastic about statistics and my insistence on modelling rather than mathematical validation. He may have been the only one in the class, as my students always complain about not seeing the point in slides with no mathematical result. Like earlier this week when, after 90 minutes spent introducing the bootstrap method, a student asked me what was new compared with the Glivenko-Cantelli theorem I had presented the week before… (Thanks anyway to David for his vote and his kind words!)
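For the record, the answer fits in three lines of R [an illustrative sketch of mine, not the actual slides]: the bootstrap does not merely state that the ecdf converges, it uses it as a plug-in to assess the variability of an estimator, e.g.,

x=rnorm(100)                                       # hypothetical data
boot=replicate(1e3, mean(sample(x, replace=TRUE))) # resampling from the ecdf
sd(boot)                                           # bootstrap standard error of the mean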

Le Monde puzzle [#1024]

Posted in Books, Kids on October 10, 2017 by xi'an

The penultimate and appropriately somewhat Monty Hallesque Le Monde mathematical puzzle of the competition!

A dresser with 5×5 drawers contains a single object in one of the 25 drawers. A player opens one drawer at a time and, after each choice, the object moves at random to a drawer adjacent to its current location, while the drawer chosen by the player remains open. What is the maximum number of drawers one needs to open to find the object?

In a dresser with 9 drawers in a line, containing again a single object, the player opens drawers one at a time, after which the open drawer is closed and the object moves to one of the drawers adjacent to its current location. What is the maximum number of drawers one needs to open to find the object?

For the first question, a feasible approach is to set a pattern of exploration and, given this pattern, to simulate a random walk trying to avoid the said pattern as long as possible, returning the maximum number of steps over many random walks [and hence a lower bound on the true maximum], as in the following code:

# lower-bound simulation for the 5x5 dresser: drawers are numbered 1 to 25
# column-wise, pater is the opening pattern, t the hidden location of the object
sefavyd=function(pater=seq(1,49,2)%%25+1){
  fild=matrix(0,5,5)      # record of which drawers are open
  m=pater[1];i=fild[m]=1  # open the first drawer of the pattern
  t=sample((1:25)[-m],1)  # the object starts in any other drawer
  nomove=FALSE
  while (!nomove){
   i=i+1
   m=pater[i];fild[m]=1   # open the next drawer of the pattern
   if (t==m){ nomove=TRUE}else{
   # unopened drawers adjacent to the current location of the object
   muv=NULL
   if ((t-1)%%5>0) muv=c(muv,t-1)
   if (t%%5>0) muv=c(muv,t+1)
   if ((t-1)%/%5>0) muv=c(muv,t-5)
   if ((t-1)%/%5<4) muv=c(muv,t+5)
   muv=muv[fild[muv]==0]
   nomove=(length(muv)==0)           # object trapped, hence found
   if (!nomove) t=sample(rep(muv,2),1)}
  }
  return(i)}

But a direct reasoning starts from the observation that, as long as two adjacent drawers are not both opened, the random walk can, with non-zero probability, keep switching between these two drawers indefinitely. Hence, a sure recovery of the object requires opening one drawer out of every adjacent pair. The minimal number of drawers to open on a 5×5 dresser is thus 2+3+2+3+2=12 [one colour class of the checkerboard colouring, counted column by column]. Since those 12 drawers are all open after 12 steps, spotting the object may require up to 13 steps.

For the second case, unless I [again!] misread the question, whatever pattern one picks for the exploration, there is always a non-zero probability to avoid discovery after an arbitrary number of steps. The [wrong!] answer is thus infinity. To cross-check this reasoning, I wrote the following R code that mimics a random pattern of exploration, coupled with an opportunistic random walk that avoids discovery whenever possible (even with very low probability) by pushing the object towards the centre:

# simulation for the 9-drawer line: the object dodges the next drawer to be
# opened whenever it can and is only caught when stuck at an end of the line
# (or when the iteration cap is reached)
drawl=function(){
  i=1;t=5;nomove=FALSE      # the object starts in the middle drawer
  while (!nomove){
    nextm=sample((1:9),1)   # drawer opened at random by the player
    muv=c(t-1,t+1)          # candidate moves for the object
    muv=muv[(muv>0)&(muv<10)&(muv!=nextm)]
    nomove=(length(muv)==0)||(i>1e6)
    if (!nomove) t=sample(rep(muv,2),1,
              prob=1/(5.5-rep(muv,2))^4)  # biased towards the centre
    i=i+1}
  return(i)}

which returns unlimited values on repeated runs. However, I was wrong, and the R code was unable to dismiss my a priori, as later discussions with Robin and Julien at Paris-Dauphine exhibited ways of terminating the random walk in 18, then 15, then 14 steps! The idea was to push the target to one of the endpoints, where it has no option but to turn back: an opening pattern like 2, 3, 4, 5, 6, 7, 8, 8 would take care of a hidden object starting in an even drawer, while the subsequent 7, 6, 5, 4, 3, 2 openings would terminate any random path starting from an odd drawer. To double check:

grawl=function(){
  len=0;muvz=c(3:8,8:1)       # opening pattern, of length 14
  for (t in 1:9){             # try every possible starting drawer
    i=1;m=muvz[i];nomove=(t==m)
    while (!nomove){
     i=i+1;m=muvz[i];muv=c(t-1,t+1)
     muv=muv[(muv>0)&(muv<10)&(muv!=m)]  # moves avoiding the opened drawer
     nomove=(length(muv)==0)             # no escape left: object found
     if (!nomove)
      t=sample(rep(muv,2),1)}
    len=max(len,i)}            # longest chase over all starting drawers
  return(len)}

which produces the value 14.

Journée algorithmes stochastiques

Posted in Books, pictures, Statistics, University life on September 27, 2017 by xi'an

On December 1, 2017, we will hold a day workshop on stochastic algorithms at Université Paris-Dauphine, with the following speakers

Details and abstracts of the talks are available on the workshop webpage. Attendance is free, but registration is requested to help plan the morning and afternoon coffee breaks. Looking forward to seeing ‘Og’s readers there, at least those in the vicinity!

And while I am targeting Parisians, crypto-Bayesians, and nearly-Parisians, there is another day workshop on Bayesian and PAC-Bayesian methods on November 16, at Université Pierre et Marie Curie (campus Jussieu), with invited speakers

and a similar request for (free) registration.