Archive for École Polytechnique

principles of uncertainty (second edition)

Posted in Books, Statistics, Travel, University life on July 21, 2020 by xi'an

A new edition of Principles of Uncertainty is about to appear. I was asked by CRC Press to review the new book and here are some (raw) extracts from my review. (Some comments may not apply to the final and published version, mind.)

In Chapter 6, the proof of the Central Limit Theorem utilises the “smudge” technique, which is to add independent noise to both the sequence of rvs and its limit. This is most effective and reminds me of quite a similar proof Jacques Neveu used in his probability notes at Polytechnique. Which went under the more formal denomination of convolution, with the same (commendable) purpose of avoiding Fourier transforms. If anything, I would have favoured a slightly more condensed presentation in less than 8 pages. Is Corollary 6.5.8 useful or even correct??? I do not think so, because the non-centred average rescaled by √n diverges almost surely. For the same reason, I object to the very first sentence of Section 6.5 (p.246).
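To spell out the objection, here is a minimal sketch, using nothing but the strong law of large numbers and assuming iid variables with a non-zero mean μ:

    % assume X_1, X_2, ... iid with E[X_1] = \mu \neq 0; by the SLLN, \bar{X}_n \to \mu a.s., hence
    \[
      \frac{1}{\sqrt{n}} \sum_{i=1}^{n} X_i \;=\; \sqrt{n}\,\bar{X}_n
      \;\longrightarrow\; \operatorname{sign}(\mu)\cdot\infty
      \quad\text{almost surely},
    \]
    % meaning the non-centred rescaled average has no (non-degenerate) Gaussian limit:
    % the centring by n\mu cannot be dispensed with.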

In Chapter 7, I found a nice mention of (Hermann) Rubin’s insistence on not separating probability and utility, as only the product matters. And another fascinating quote from Keynes, not from his early statistician’s years, but from 1937, as an established economist:

“The sense in which I am using the term uncertain is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to the summed.”

(is the last sentence correct? I would have expected, pardon my French!, “to be summed”). Further interesting trivia on the criticisms of utility theory, including de Finetti’s role and his own lack of connection with subjective probability principles.

In Chapter 8, a major remark (iii) found on p.293, about the fact that a conjugate family requires a dominating measure (although this is expressed differently since the book shies away from introducing measure theory), reminds me of a conversation I had with Jay when I visited Carnegie Mellon in 2013 (?). Which exposes the futility of seeing conjugate priors as default priors. It is somewhat surprising that a notion like admissibility appears as a side quantity when discussing Stein’s paradox in 8.2.1 [and then later in Section 9.1.3], while it seems to me to be central to Bayesian decision theory, much more than the epiphenomenon that Stein’s paradox represents in the big picture (a small simulation sketch of that paradox follows the quote below). But the book dismisses minimaxity even faster in Section 9.1.4:

As many who suffer from paranoia have discovered, one can always dream-up an even worse possibility to guard against. Thus, the minimax framework is unstable. (p.336)
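Since admissibility only surfaces through Stein’s paradox, here is a minimal Python sketch of that paradox, entirely illustrative and not taken from the book (the dimension, number of replications, and true mean vector are arbitrary choices): it compares the quadratic risk of the usual estimator x with the positive-part James–Stein estimator for a normal mean in dimension p ≥ 3.

    import numpy as np

    rng = np.random.default_rng(0)
    p, n_rep = 10, 10_000              # dimension (p >= 3) and number of replications
    theta = rng.normal(size=p)         # an arbitrary true mean vector

    mle_risk, js_risk = 0.0, 0.0
    for _ in range(n_rep):
        x = theta + rng.normal(size=p)                     # one observation x ~ N(theta, I_p)
        shrink = max(0.0, 1 - (p - 2) / np.sum(x**2))      # positive-part James-Stein factor
        js = shrink * x
        mle_risk += np.sum((x - theta) ** 2)
        js_risk += np.sum((js - theta) ** 2)

    print("usual estimator risk:", mle_risk / n_rep)       # about p
    print("James-Stein risk    :", js_risk / n_rep)        # strictly smaller, whatever theta

The uniform (in θ) improvement it exhibits is exactly what makes the usual estimator inadmissible, hence my feeling that admissibility deserves more than a side role.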

Interesting introduction of the Wishart distribution to kindly handle random matrices and matrix Jacobians, with the original space being the p(p+1)/2-dimensional real space (implicitly endowed with the Lebesgue measure). Rather than a more structured matricial space. A font error makes Corollary 8.7.2 abort abruptly. The space of positive definite matrices is mentioned in Section 8.7.5 but still (implicitly) corresponds to the common p(p+1)/2-dimensional real Euclidean space. Another typo in Theorem 8.9.2 with a Frenchised version of Dirichlet, Dirichelet. Followed by a Dirchlet at the end of the proof (p.322). Again and again on p.324 and on following pages. I would object to the singular in the title of Section 8.10 as there are exponential families rather than a single one. With no mention made of the Pitman-Koopman lemma and its consequences, namely that the existence of conjugacy remains an epiphenomenon. Hence making the number of pages dedicated to the gamma, Dirichlet and Wishart distributions somewhat excessive.
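To make the epiphenomenon point explicit, here is the standard textbook construction of a conjugate family (not specific to the book under review), available in this generality only for exponential families, which is where the Pitman-Koopman lemma enters:

    \[
      f(x\mid\theta) = h(x)\,\exp\bigl\{\theta^{\top} T(x) - A(\theta)\bigr\},
      \qquad
      \pi(\theta\mid\tau, n_0) \propto \exp\bigl\{\theta^{\top}\tau - n_0\,A(\theta)\bigr\},
    \]
    % so that observing x_1, ..., x_n merely updates the hyperparameters
    \[
      (\tau, n_0) \;\longmapsto\; \Bigl(\tau + \sum_{i=1}^{n} T(x_i),\; n_0 + n\Bigr),
    \]
    % while, by the Pitman-Koopman lemma, only exponential families (under regularity
    % conditions and for a fixed support) admit sufficient statistics of fixed dimension,
    % hence such a generic conjugate structure.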

In Chapter 9, I noticed (p.334) a Scheffe that should be Scheffé (and again at least on p.444). (I love it that Jay also uses my favorite admissible (non-)estimator, namely the constant value estimator with value 3.) I wonder at the worth of a ten-line section like 9.3, when there are delicate issues in handling meta-analysis, even in a Bayesian mood (or mode). In the model uncertainty section, Jay discusses the (im)pertinence of both selecting one of the models and setting independent priors on their respective parameters, with which I disagree on both levels. Although this is followed by a more reasonable (!) perspective on utility. Nice to see a section on causation, although I would have welcomed an insert on the recent and somewhat outrageous stand of Pearl (and MacKenzie) on statisticians missing the point on causation and counterfactuals by miles. Nonparametric Bayes is a new section, inspired by Ghahramani (2005). But while it mentions Gaussian and Dirichlet [invariably misspelled!] processes, I fear it falls short of enticing the reader to truly grasp the meaning of a prior on functions. Beyond mentioning that it exists, I am unsure of the utility of this section. This is one of the rare instances where measure theory is discussed, only to state this is beyond the scope of the book (p.349).
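For what a “prior on functions” concretely means, here is a minimal and purely illustrative Python sketch (not from the book; the grid, length-scale, and number of draws are arbitrary choices) drawing a few random functions from a Gaussian process prior with a squared-exponential covariance:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 200)                   # grid on which the functions are evaluated

    # squared-exponential covariance kernel, with an (assumed) length-scale of 0.1
    def sq_exp_kernel(s, t, length_scale=0.1):
        return np.exp(-0.5 * (s[:, None] - t[None, :]) ** 2 / length_scale**2)

    K = sq_exp_kernel(x, x) + 1e-8 * np.eye(len(x))  # jitter for numerical stability

    # each row below is one random function drawn from the GP prior GP(0, K)
    prior_draws = rng.multivariate_normal(mean=np.zeros(len(x)), cov=K, size=5)
    print(prior_draws.shape)                         # (5, 200): five functions on the grid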

Monsieur le Président [reposted]

Posted in Books, Statistics, University life on April 11, 2020 by xi'an

Let us carry out screening campaigns on representative samples of the population!

Mr President of the Republic, as you rightly indicated, we are at war and everything must be done to combat the spread of COVID-19. You had the wisdom to surround yourself with a Scientific Council and an Analysis, Research and Expertise Committee, both competent, and, as you know, applied mathematicians and statisticians have a role to play in this battle. Yes, to predict the evolution of the epidemic, mathematical models are used at different scales. This allows us to estimate the number of people infected in the coming weeks and months. We are at war and these predictions are essential to the development of the best control strategy. They inform political decisions. It is in particular with the help of these items of information that the confinement of the French population has been decided and renewed.

Mr President, we are at war and these predictions must be as robust as possible. The more precise they are, the better the decisions they will guide. Mathematical models include a number of unknown parameters whose values should be set based on expert advice or data. These include the transmission rate, incubation time, contagion time, and, of course, to initialize dynamic mathematical models, the number of infected individuals. To obtain more reliable predictions, it is necessary to better estimate such crucial quantities. The proportion of healthy carriers appears to be a particularly critical parameter.

Mr President, we are at war and we must assess the proportion of healthy carriers by geographic area. We do not currently have the means to implement massive screenings, but we can carry out surveys. This means running, for a well-defined geographic area, biological tests on samples of individuals drawn at random and representative of the total population of the area. Such data would supplement those already available and would considerably reduce the uncertainty in model predictions.

Mr. President, we are at war, let us give ourselves the means to fight effectively against this scourge. Thanks to a significant effort, the number of individuals that can be tested daily is increasing significantly; let us devote some of these available tests to representative samples. For each individual drawn at random, we would perform a nasal swab and a blood test, and collect clinical data and other items of information on their compliance with barrier measures. This would provide important information on the percentage of immunized French people. These data would make it possible to feed the mathematical models wisely, and hence to make informed decisions about the different deconfinement strategies.

Mr. President, we are at war. This strategy, which could at first be deployed only in the most affected sectors, is, we believe, essential. It is doable: designing the survey and determining a representative sample is not an issue; going to the homes of the people in the sample to take biological samples and have them fill out a questionnaire is also perfectly achievable if we give ourselves the means to do so. You only have to decide that a few of the available PCR tests and serological tests will be devoted to these statistical studies. In Paris and in the Grand Est, for instance, a mere few thousand tests on a representative sample of properly selected individuals would allow a better assessment of the situation and help in making informed decisions.

Mr. President, a proposal to this effect has been presented to the Scientific Council and to the Analysis, Research and Expertise Committee that you have set up, by a group of mathematicians at École Polytechnique with Professor Josselin Garnier at their head. You will realise by reading this tribune that the statistician that I am supports it very strongly. I am in no way disputing the competence of the councils which support you, but you have to act quickly and, I repeat, only dedicate a few thousand tests to statistical studies. The emergency is everywhere, assistance to the patients and to people in intensive care must of course be the priority, but let us attempt to anticipate as well. We do not have the means to massively test the entire population, so let us run surveys.

Jean-Michel Marin
Professor at the Université de Montpellier
President of the Société Française de Statistique
Director of the Institut Montpelliérain Alexander Grothendieck
Vice-Dean of the Faculté des Sciences de Montpellier
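As an aside on the statistical arithmetic behind the “mere few thousand tests” claim in the letter, here is a minimal Python sketch with made-up figures (a sample size of 3,000 and an assumed 5% prevalence, ignoring test sensitivity, specificity, and nonresponse) for the margin of error of a prevalence estimate based on a simple random sample:

    import numpy as np

    # Hypothetical figures, purely illustrative: sample size and an assumed
    # true proportion of (healthy) carriers in the surveyed area.
    n = 3000
    p_true = 0.05

    # Worst-case and assumed-case margins of error for a 95% confidence interval
    # on a proportion estimated from a simple random sample (normal approximation).
    z = 1.96
    moe_worst = z * np.sqrt(0.25 / n)                    # p(1-p) is maximised at p = 1/2
    moe_assumed = z * np.sqrt(p_true * (1 - p_true) / n)

    print(f"95% margin of error, worst case : +/- {moe_worst:.3f}")     # about +/- 0.018
    print(f"95% margin of error, p = {p_true}: +/- {moe_assumed:.3f}")  # about +/- 0.008

A few thousand properly randomised tests thus pin a regional proportion down to within a percentage point or so, which is the order of magnitude the letter appeals to.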

assistant/associate professor position in statistics/machine-learning at ENSAE

Posted in pictures, Statistics, Travel, University life on March 10, 2020 by xi'an

ENSAE (my Alma Mater) is opening a new position for next semester in statistics and/or machine learning. At the Assistant Professor level, the position is for an initial three-year term, renewable for another three years, before the tenure evaluation. The school is located on the Université Paris-Saclay campus, only teaches at the Master and PhD levels, and the deadline for applications is 31 March 2020. Details and contacts on the call page.

ABC-SAEM

Posted in Books, Statistics, University life on October 8, 2019 by xi'an

In connection with the recent PhD thesis defence of Juliette Chevallier, in which I took a somewhat virtual part for being physically in Warwick, I read a paper she wrote with Stéphanie Allassonnière on stochastic approximation versions of the EM algorithm. Computing the MAP estimator can be done via versions of EM adapted to simulated annealing, possibly using MCMC as for instance in the Monolix software and its MCMC-SAEM algorithm. Where SA stands sometimes for stochastic approximation and sometimes for simulated annealing, originally developed by Gilles Celeux and Jean Diebolt, then reframed by Marc Lavielle and Eric Moulines [friends and coauthors]. With an MCMC step because the simulation of the latent variables involves an intractable normalising constant. (Contrary to this paper, Umberto Picchini and Adeline Samson proposed in 2015 a genuine ABC version of this approach, a paper that I thought I had missed—although I now remember discussing it with Adeline at JSM in Seattle—in which ABC is used as a substitute for the conditional distribution of the latent variables given data and parameter, that is, as a substitute for the Q step of the (SA)EM algorithm. One more approximation step and one more simulation step and we would reach a form of ABC-Gibbs!) In this version, there are very few assumptions made on the approximation sequence, except that it converges with the iteration index to the true distribution (for a fixed observed sample) if convergence of ABC-SAEM is to happen. The paper takes as an illustrative sequence a collection of tempered versions of the true conditionals, but this is quite formal as I cannot fathom a feasible simulation from the tempered version and not from the untempered one. It is thus much more a version of tempered SAEM than truly connected with ABC (although a genuine ABC-EM version could be envisioned).
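To fix ideas on the generic structure being discussed, here is a minimal and schematic Python sketch of an SAEM loop in which the simulation of the latent variables is replaced by an ABC draw, on a toy Gaussian latent-variable model. The model, the tolerance eps, the step size, and all numerical values are made up for illustration; this is in no way the algorithm or model of the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    # toy latent-variable model (purely illustrative):
    # z_i ~ N(theta, 1) latent, y_i | z_i ~ N(z_i, sigma^2) observed, sigma known
    theta_true, sigma, n = 2.0, 0.5, 200
    z_true = rng.normal(theta_true, 1.0, size=n)
    y = z_true + rng.normal(0.0, sigma, size=n)

    def abc_draw_latent(y_i, theta, eps=0.1):
        """ABC substitute for a draw from p(z_i | y_i, theta): propose z* from its
        marginal, simulate a pseudo-observation, and accept when it falls within
        eps of the actual observation."""
        while True:
            z_star = rng.normal(theta, 1.0)
            y_star = z_star + rng.normal(0.0, sigma)
            if abs(y_star - y_i) <= eps:
                return z_star

    # SAEM loop with the ABC simulation step
    theta, s = 0.0, 0.0                      # initial parameter and sufficient statistic
    for k in range(1, 51):
        z = np.array([abc_draw_latent(y_i, theta) for y_i in y])  # simulation step (ABC)
        gamma = 1.0 / k                                           # stochastic-approximation step size
        s = s + gamma * (z.mean() - s)                            # SA update of the sufficient statistic
        theta = s                                                 # M step: theta maximising Q is the mean of z

    print("SAEM/ABC estimate:", theta, " true value:", theta_true)

The stochastic-approximation update of the sufficient statistic is what makes the scheme SAEM rather than plain (MC)EM; swapping the exact conditional simulation for an ABC draw is the move the paper studies in far greater generality.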

data science summer school à l’X

Posted in Statistics on January 10, 2019 by xi'an