When I started the ‘Og, in 2008, I was about to run the 23rd edition of the Argentan half-marathon… Seven years later, I am once again getting ready for the race, after a rather good training season, between the mountains of the North Cascades and the track of Malakoff, with the last week in England, Holland, and Canada having seen close to two training sessions a day. (Borderline stress injury, maybe!) The weather does not look too bad this year, so we’ll see tomorrow how I fare against myself (and the other V2 runners, incidentally!).
While my arXiv newspage today had a puzzling entry about modelling UFO sightings in France, it also broadcast our revision of Reliable ABC model choice via random forests, a version that we resubmitted today to Bioinformatics after a quite thorough upgrade, the most dramatic change being the realisation that we could also approximate the posterior probability of the selected model via another random forest. (With no connection with the recent post on forest fires!) As discussed a little while ago on the ‘Og, and in conjunction with our creating the abcrf R package for running ABC model choice out of a reference table. While it has been an excruciatingly slow process (the initial version of the arXived document dates from June 2014, the PNAS submission was rejected for not being Bayesian enough, and the latest revision took the whole summer), the slow maturation of our thoughts on the model choice issues led us to modify the role of random forests in the ABC approach to model choice: we reverted our earlier assessment that they could only be trusted for selecting the most likely model, by realising this summer that the corresponding posterior probability could be expressed as a posterior loss and estimated by a secondary forest, as first considered in Stoehr et al. (2014). (In retrospect, this brings an answer to one of the earlier referee’s comments.) The next goal is to incorporate those changes in DIYABC (and wait for the next version of the software to appear). Another best-selling innovation, due to Arnaud: we added a practical implementation section in the format of an FAQ for issues related to the calibration of the algorithms.
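The two-forest idea above can be sketched in a few lines. This is only a minimal illustration in Python with scikit-learn standing in for the abcrf R package, on synthetic data (the two toy models, the summary statistics, and the "observed" summaries are all made up for the example): a first classification forest picks a model from the summaries in the reference table, and a secondary regression forest fitted to the out-of-bag misclassification indicator gives an estimate of the posterior probability of the selected model.

```python
# Hedged sketch of ABC model choice via two random forests, with
# scikit-learn as a stand-in for the abcrf R package; all data synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "reference table": summaries simulated under two toy models.
n = 2000
m = rng.integers(0, 2, size=n)              # model index 0 or 1
theta = rng.normal(size=n)                  # parameter draws from the prior
s1 = theta + np.where(m == 1, 1.0, 0.0) + rng.normal(scale=0.5, size=n)
s2 = theta**2 + rng.normal(scale=0.5, size=n)
S = np.column_stack([s1, s2])               # summary statistics

# Step 1: a classification forest selects the model for the observed data.
clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(S, m)
s_obs = np.array([[1.2, 0.8]])              # hypothetical observed summaries
m_hat = int(clf.predict(s_obs)[0])

# Step 2: a regression forest on the out-of-bag misclassification
# indicator estimates the posterior error, hence P(selected model | data).
oob_pred = np.argmax(clf.oob_decision_function_, axis=1)
misclassified = (oob_pred != m).astype(float)
reg = RandomForestRegressor(n_estimators=500, random_state=0)
reg.fit(S, misclassified)
post_prob = 1.0 - reg.predict(s_obs)[0]
print(m_hat, round(post_prob, 2))
```

The point of the second forest is that the posterior probability of the selected model need not be read off the classifier's vote frequencies, which are poorly calibrated; it is instead recovered as one minus a predicted misclassification loss at the observed summaries.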
While cooking for a late Sunday lunch today [sweet-potato röstis], I was listening as usual to the French public radio (France Inter) and at some point heard the short [10 mn] Périphéries, which gives every weekend an insight into the suburbs [on the “other side” of the Parisian Périphérique boulevard]. The idea proposed by a geographer from Montpellier, Emmanuel Vigneron, was to point out the health inequalities between the wealthy 5th arrondissement of Paris and the not-so-far-away suburbs, by following the RER B train line from Luxembourg to La Plaine-Stade de France…
The disparities between the heart of Paris and some suburbs are numerous and massive, growing the further one gets from the lifeline represented by the RER A and RER B train lines, so far from me the idea of negating this opposition, but the presentation made during those 10 minutes of Périphéries was quite approximative in statistical terms. For instance, the mortality rate in La Plaine is 30% higher than the mortality rate in Luxembourg, and this was translated into the claim that the chances for a given individual from La Plaine to die in the coming year are 30% higher than if he [or she] lived in Luxembourg. Then, a few minutes later, that the chances for a given individual from Luxembourg to die are 30% lower than if he [or she] lived in La Plaine… Reading from the above map, it appears that the reference is the mortality rate for Greater Paris. (Those are 2010 figures.) This opposition, which Vigneron attributes to a differential access to health facilities, like the number of medical general practitioners per inhabitant, does not account for the huge socio-demographic differences between both places, for instance the much younger and maybe larger population in suburbs like La Plaine. Nor for other confounding factors: see, e.g., the equally large difference between the neighbouring stations of Luxembourg and Saint-Michel, where there is no socio-demographic difference and the accessibility of health services is about the same. Or the similar opposition between the southern suburban stops of Bagneux and [my local] Bourg-la-Reine, with the same access to health services… Or yet again the massive decrease in the Yvette valley near Orsay. The analysis is thus statistically poor and somewhat ideologically biased, in that I am unsure the data discussed during this radio show tells us much more than the sad fact that suburbs with less favoured populations show a higher mortality rate.
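Incidentally, the show's symmetric phrasing ("30% higher" one way, "30% lower" the other) contains a small arithmetic slip on top of the confounding issue: relative differences are not symmetric. A two-line check, with illustrative normalised rates:

```python
# If the La Plaine mortality rate is 30% above the Luxembourg one, the
# Luxembourg rate is only about 23% below the La Plaine rate, not 30%.
lux = 1.0                    # Luxembourg rate, normalised (illustrative)
plaine = 1.3 * lux           # "30% higher"
drop = (plaine - lux) / plaine
print(f"{drop:.1%}")         # → 23.1%
```

The asymmetry is generic: a relative increase of r one way corresponds to a relative decrease of r/(1+r) the other way.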
- 1 – 6 February, 2016 Learning
- 8 – 12 February, 2016 Mathematical statistics
- 15 – 19 February, 2016 Processes
- 22 – 26 February, 2016 Extremes, Copulas and Actuarial Science
- 29 February – 4 March, 2016 Bayesian statistics and algorithms
Each week will see minicourses of a few hours (2-3) and advanced talks, leaving time for interactions and collaborations. (I will give one of those minicourses on Bayesian foundations.) The scientific organisers of the B’ week are Gilles Celeux and Nicolas Chopin.
The CIRM is a wonderful meeting place, in the mountains between Marseilles and Cassis, with many trails to walk and run, and hundreds of fantastic climbing routes in the Calanques at all levels. (In February, the sea is too cold to contemplate swimming. The good side is that it is not too warm to climb and the risk of bush fire is very low!) We stayed there with Jean-Michel Marin a few years ago when preparing Bayesian Essentials. The maths and stats library is well-stocked, with permanent access for quiet working sessions. This is the French version of the equally fantastic German Mathematisches Forschungsinstitut Oberwolfach. Financial support will be available from the supporting societies and research bodies, at least for young participants, and the costs, if any, are low, for excellent food and excellent lodging. Definitely not a scam conference!
Earlier today, I received an invitation to give a plenary talk at a probability and statistics conference in Marrakech, a nice location if any! As it came from a former graduate student from the University of Rouen (where I taught before Paris-Dauphine), and despite an already heavy travelling schedule for 2016!, I considered his offer. And looked for the conference webpage to find the dates, as my correspondent had forgotten to include those. Instead of the genuine conference webpage, which had not yet been created, what I found was a fairly unpleasant scheme playing on the same conference name and location, but run by a predatory conglomerate called WASET, which stands for World Academy of Science, Engineering, and Technology. Their website lists thousands of conferences, all in nice, touristy places, and all with an identical webpage. For instance, there is the ICMS 2015: 17th International Conference on Mathematics and Statistics next week, with a huge “conference committee” but not a single name I can identify. And no one from France. Actually, the website kindly offers entry by city as well as by topic, which helps in spotting that a large number of ICMS conferences all take place on the same dates and at the same hotel in Paris… The trick is indeed to attract speakers with the promise of publication in a special issue of a bogus journal and to have them pay 600€ in registration and publication fees, only to have all topics mixed together in a few conference rooms, according to many testimonies I later found on the web. And as is clear from the posted conference program! In the “best” of cases, since other testimonies mention lost fees and rejected registrations. Testimonies also mention this tendency to reproduce the acronym of a local conference.
While it is not unheard of for conferences to amount to academic tourism, even from the most established scientific societies!, I am quite amazed at the scale of this enterprise, even though I cannot completely understand how people can fall for it. Looking at the website, the fees, the unrelated scientific committee, and the lack of a scientific program should be enough to put those victims off. Unless they truly want to partake in academic tourism, obviously.
The workshop at the BIPM on measurement uncertainty was certainly most exciting, first by its location in the Parc de Saint-Cloud, in classical buildings overlooking the Seine river in a most bucolic manner… and second by its mostly Bayesian flavour. The recommendations that the workshop addressed are about revisions of the current GUM, which stands for the Guide to the Expression of Uncertainty in Measurement. The discussion centred on using a more Bayesian approach than in the earlier version, with the organisers of the workshop and leaders of the revision apparently most in favour of that move. “Knowledge-based pdfs” came into the discussion as an attractive notion, since it rings a Bayesian bell, especially when associated with probability as a degree of belief and incorporating the notion of an a priori probability distribution. And with propagation of errors. Or even more when mentioning the removal of frequentist validations. What I gathered from the talks is the perspective drifting away from central-limit approximations towards more realistic representations, calling for Monte Carlo computations. There is also a lot I did not get about conventions, codes, and standards. Including a short debate about the different meanings of Monte Carlo, from simulation technique to calculation method (as for confidence intervals). And another discussion about extending the old formula for estimating a standard deviation from the Normal to the Student’s t case, a change that remains highly debatable since the Student’s t assumption is as shaky as the Normal one. What became clear [to me] during the meeting is that a rather heated debate is currently taking place about the need for a revision, with some members of the six (?) organisations involved arguing against Bayesian or linearisation tools.
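The drift from central-limit approximations to Monte Carlo computations mentioned above can be illustrated in a few lines. This is only a sketch with made-up numbers, not anything from the workshop: for a product of two measured quantities, it compares the classical linearised propagation of uncertainty with a plain Monte Carlo propagation of the input distributions, which also delivers a coverage interval without any Normality assumption on the output.

```python
# Minimal sketch: linearised uncertainty propagation versus Monte Carlo,
# for Y = X1 * X2 with independent Normal inputs (illustrative values).
import numpy as np

rng = np.random.default_rng(42)
mu1, u1 = 10.0, 0.5          # estimate and standard uncertainty of X1
mu2, u2 = 2.0, 0.1           # same for X2

# Linearisation (classical law of propagation of uncertainty):
u_lin = np.sqrt((mu2 * u1) ** 2 + (mu1 * u2) ** 2)

# Monte Carlo: simulate the inputs and push them through the model.
x1 = rng.normal(mu1, u1, size=1_000_000)
x2 = rng.normal(mu2, u2, size=1_000_000)
y = x1 * x2
u_mc = y.std()
ci = np.quantile(y, [0.025, 0.975])  # 95% coverage interval, no Normal assumption
print(round(u_lin, 3), round(u_mc, 3), ci.round(2))
```

For this nearly linear model the two standard uncertainties agree closely; the practical gain of the Monte Carlo route shows up for strongly nonlinear models or non-Normal inputs, where the linearised formula and the Normal-based interval break down.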
This became even clearer during our frequentist-versus-Bayesian session, with a first talk so outrageously anti-Bayesian it was hilarious! Among other things, the notion that “fixing” the data was against the principles of physics (the speaker was a physicist), that the only randomness in a Bayesian coin tossing was coming from the prior, that the likelihood function was a subjective construct, that the definition of the posterior density was a generalisation of Bayes’ theorem [a generalisation found in… Bayes’ 1763 paper, then!], that objective Bayes methods were inconsistent [because Jeffreys’ prior produces an inadmissible estimator of μ²!], that the move to Bayesian principles in the GUM would cost the New Zealand economy 5 billion dollars [hopefully a frequentist estimate!], &tc., &tc. The second pro-frequentist speaker was by comparison much, much more reasonable, although he insisted on showing that Bayesian credible intervals do not achieve a nominal frequentist coverage, using a sort of fiducial argument distinguishing x=X+ε from X=x+ε that I missed… A lack of achievement that is fine by my standards. Indeed, a frequentist confidence interval provides a coverage guarantee either for a fixed parameter (in which case the Bayesian approach achieves better coverage by constant updating) or for a varying parameter (in which case the frequency of proper inclusion is of no real interest!). The first Bayesian speaker was Tony O’Hagan, who summarily tore the first talk to shreds. And who also criticised GUM2 for using reference priors and maxent priors. I am afraid my talk was a bit too exploratory for the audience (since I got absolutely no question!). In retrospect, I should have given an intro to reference priors.
An interesting specificity of a workshop on metrology and measurement is that the participants are hard sticklers for schedule, starting and finishing right on time. When a talk finished early, we waited until the intended time for the next talk, not even allowing for extra discussion. When the only speaker to run overtime, a Belgian, ran close to 10 minutes late, I was afraid he would (deservedly) get lynched! He escaped unscathed, but may (and should) not get invited again…!