Archive for the pictures Category

help for Cox’s Bazar

Posted in Kids, pictures, Travel on October 25, 2020 by xi'an

Nous continuerons, Professeur.

Posted in Kids, pictures on October 24, 2020 by xi'an

“We will continue, Professor. With all the schoolteachers and professors of France, we will teach history, its glories as well as its vicissitudes. We will introduce literature, music, all the works of the soul and of the mind. We will love with all our strength debate, reasonable arguments, kindly persuasion. We will love science and its controversies. Like you, we will cultivate tolerance. Like you, we will seek to understand, relentlessly, and to understand even more that which some would like to keep away from us. We will teach humour and critical distance. We will remind everyone that our freedoms hold only through the end of hatred and violence, through respect for others.”

Emmanuel Macron, 21 October 2020

parking riddle

Posted in Books, Kids, pictures, R, Statistics, Travel on October 23, 2020 by xi'an

The Riddler this week had a quick riddle: to avoid parallel parking a car on a six-spot street, one needs either the first spot to be free or two consecutive free spots. What is the probability that this happens when 4 other cars are already parked (at random)?

While a derivation by combinatorics easily returns 9/15 as the probability of escaping parallel parking, a straightforward R simulation agrees:

 l = 0
 for (t in 1:1e6) {
   k = sort(sample(0:5, 4))    # spots occupied by the 4 cars, among 0..5
   l = l + (k[1] > 0 |         # first spot (0) is free
            3 %in% diff(k) |   # two consecutive free spots between cars
            k[4] == 3)         # spots 4 and 5 are free
 }
 l / 1e6                       # ≈ 9/15

since the number of favourable configurations (out of the choose(6,2)=15 placements of the two free spots) is recovered as

> round(choose(6,2)*l/1e6)
[1] 9
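Since the combinatorics are tiny, the simulation can also be checked by exhaustive enumeration of the 15 possible pairs of free spots (a quick sketch, here in Python):

```python
from itertools import combinations

# spots are numbered 0..5; exactly two of them are free
good = sum(a == 0 or b - a == 1          # first spot free, or adjacent free pair
           for a, b in combinations(range(6), 2))
print(good, "/ 15")   # 9 / 15
```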

QMC at CIRM

Posted in Mountains, pictures, Statistics, Travel, University life on October 21, 2020 by xi'an

marginal likelihood with large amounts of missing data

Posted in Books, pictures, Statistics on October 20, 2020 by xi'an

In 2018, Panayiota Touloupou, research fellow at Warwick, and her co-authors published a paper in Bayesian Analysis that somehow escaped my radar, despite sitting squarely within my first circle of topics of interest! They construct an importance sampling approach to approximating the marginal likelihood, with the importance function approximated from a preliminary MCMC run, and consider the special case when the sampling density (i.e., the likelihood) can be represented as the marginal of a joint density. While this demarginalisation perspective is rather usual, the central point they make is that it is more efficient to estimate the sampling density based on the auxiliary or latent variables than to consider the joint posterior distribution of parameter and latent variables in the importance sampler. This induces a considerable reduction in dimension and hence explains (in part) why the approach should prove more efficient, even though the approximation itself is costly, at about 5 seconds per marginal likelihood. A nice feature of the paper is the above graph, which reports both computing time and variability for the different methods (the blue range corresponding to the marginal importance solution, the red range to RJMCMC, and the green range to Chib's estimate). Note that bridge sampling does not appear in the picture but returns a variability similar to that of the proposed methodology.
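As a rough illustration of the general recipe (a minimal sketch, not the authors' algorithm), consider a toy conjugate model where the exact marginal likelihood is available in closed form: posterior draws stand in for the preliminary MCMC output, a Gaussian importance function is fitted to them, and the marginal likelihood is estimated by averaging the prior-times-likelihood over draws from that importance function.

```python
import numpy as np

def logpdf(x, mu, sd):
    # log density of N(mu, sd^2) at x
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu)**2 / (2 * sd**2)

rng = np.random.default_rng(0)

# toy data: x_i ~ N(theta, 1), i = 1..n, with a N(0, 1) prior on theta
n = 20
x = rng.normal(1.0, 1.0, n)
s = x.sum()

# exact log marginal likelihood by normal-normal conjugacy:
# m(x) = (2*pi)^(-n/2) (n+1)^(-1/2) exp(-sum(x^2)/2 + s^2/(2(n+1)))
exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
         - 0.5 * (x**2).sum() + s**2 / (2 * (n + 1)))

# stand-in for the preliminary MCMC output: exact posterior draws,
# theta | x ~ N(s/(n+1), 1/(n+1))
draws = rng.normal(s / (n + 1), np.sqrt(1 / (n + 1)), 10_000)

# importance function fitted to the (pseudo-)MCMC output
mu, sd = draws.mean(), draws.std()

# importance sampling estimate of the log marginal likelihood
th = rng.normal(mu, sd, 10_000)
logw = (logpdf(x[:, None], th, 1).sum(axis=0)   # log-likelihood
        + logpdf(th, 0, 1)                      # log-prior
        - logpdf(th, mu, sd))                   # log importance density
est = np.logaddexp.reduce(logw) - np.log(len(th))

print(exact, est)   # the two values agree up to Monte Carlo error
```

The dimension-reduction point of the paper would kick in when theta comes with latent variables: the importance function is then built on the parameter alone, with the latent structure handled through the (estimated) sampling density rather than through the joint posterior.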