Archive for the University life Category

Fourth Bayesian, Fiducial, and Frequentist Conference

Posted in Books, pictures, Statistics, Travel, University life, Wines on March 29, 2017 by xi'an

Next May 1-3, I will attend the 4th Bayesian, Fiducial and Frequentist Conference at Harvard University (hopefully not under snow at that time of year), which is a meeting between philosophers and statisticians about foundational thinking in statistics and inference under uncertainty. This should be fun! (Registration is now open.)

GG Day in Rouen

Posted in Kids, pictures, Statistics, Travel, University life on March 26, 2017 by xi'an

[Notice: This post is fairly “local” in that it is about a long-time friend being celebrated by his university. Nice poster though and an opportunity to stress his essential contributions to the maths department there!]

Next June, I will spend the day in Rouen for a conference celebrating the career and dedication of Gérard Grancher to mathematics and to the maths department there. (When I got invited, I had not realised I was to give the research talk of the day!) Gérard Grancher is a CNRS engineer and a statistician who is inseparable from the maths department in Rouen, where he spent his whole career and is now getting quite close to [mandatory] retirement! I am very happy to take part in this celebration as Gérard has always been an essential component of the department: driving the computer infrastructure, reorganising the library, disseminating the fun of doing maths to the surrounding high schools and to the general public, and remaining a major presence in the department. I met him when I started my PhD there (!), working on the local computers in Pascal and typing my thesis with Scientific Word (!!)

parameter space for mixture models

Posted in Statistics, University life on March 24, 2017 by xi'an

“The paper defines a new solution to the problem of defining a suitable parameter space for mixture models.”

When I received the table of contents of the forthcoming issue of Statistics & Computing and saw a paper by V. Maroufy and P. Marriott on the above topic, I was quite excited about a new approach to mixture parameterisation, especially after our recent reposting of the weakly informative reparameterisation paper. Alas, after reading the paper, I fail to see the (statistical) point of the whole exercise.

Starting from the basic fact that mixtures face many identifiability issues, not only invariance under component permutation but also the possibility of adding spurious components, the authors move to an entirely different galaxy by defining mixtures of so-called local mixtures, a notion developed by one of the authors. The notion is just incomprehensible to me: the object is a weighted sum of the basic component of the original mixture, e.g., a Normal density, and of k of its derivatives with respect to its mean, a sort of parameterised Taylor expansion. Which incidentally implies the parameter is unidimensional. The weights of this strange mixture are furthermore constrained by the positivity of the resulting mixture, a constraint that seems impossible to satisfy in the Normal case when the number of derivatives is odd, and hard to analyse in any case since possibly negative components do not enjoy an interpretation as probability densities. In exponential families, the local mixture is the original exponential family density multiplied by a polynomial. The current paper moves one step further [from the reasonable] by considering mixtures [in the standard sense] of such objects, whose components are parameterised by their mean parameter and a collection of weights. The authors then restrict the mean parameters to belong to a finite and fixed set, whose elements are determined by a maximum error rate on any compound distribution derived from this exponential family structure. The remainder of the paper discusses the choice of the mean parameters and an EM algorithm to estimate the parameters, with a confusing lower bound on the mixture weights that impacts their estimation, and with no mention made of the positivity constraint. I remain completely bemused by the paper and its purpose: I do not even fathom how this qualifies as a mixture.
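To fix ideas, here is a minimal numerical sketch of what I understand a local mixture of a Normal density to be; the function name, weights, and grid below are mine, purely for illustration, and not from the paper. The one mathematical fact used is that the j-th derivative of the N(x; µ, 1) density in µ equals He_j(x−µ)φ(x−µ), with He_j the probabilists' Hermite polynomial, so positivity of the local mixture reduces to positivity of the polynomial factor, which indeed fails whenever the top degree is odd:

    import numpy as np
    from numpy.polynomial.hermite_e import HermiteE
    from scipy.stats import norm

    def local_mixture(x, mu=0.0, lams=(0.0, 0.1)):
        """phi(x - mu) times (1 + sum_j lams[j-1] * He_j(x - mu))."""
        z = np.asarray(x) - mu
        poly = HermiteE([1.0, *lams])   # coefficients 1, lam_1, ..., lam_k
        return norm.pdf(z) * poly(z)

    x = np.linspace(-8.0, 8.0, 1001)
    print((local_mixture(x, lams=(0.5,)) < 0).any())      # True: odd top degree turns negative
    print((local_mixture(x, lams=(0.0, 0.1)) < 0).any())  # False: 0.9 + 0.1 z² stays positive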

and it only gets worse…

Posted in Kids, pictures, Travel, University life on March 23, 2017 by xi'an

“Trump wants us to associate immigrants with criminality. That is the reason behind a weekly published list of immigrant crimes – the first of which was made public on Monday. Singling out the crimes of undocumented immigrants has one objective: to make people view them as deviant, dangerous and fundamentally undesirable.” The Guardian, March 22, 2017

“‘I didn’t want this job. I didn’t seek this job,’ Tillerson told the Independent Journal Review (IJR), in an interview (…) ‘My wife told me I’m supposed to do this.’” The Guardian, March 22, 2017

“…under the GOP plan, it estimated that 24 million people of all ages would lose coverage over 10 years (…) Trump’s plan, for instance, would cut $5.8 billion from the National Institutes of Health, an 18 percent drop for the $32 billion agency that funds much of the nation’s research into what causes different diseases and what it will take to treat them.” The New York Times, March 5, 2017

X-Outline of a Theory of Statistical Estimation

Posted in Books, Statistics, University life on March 23, 2017 by xi'an

While visiting Warwick last week, Jean-Michel Marin pointed out and forwarded me this remarkable paper of Jerzy Neyman, published in 1937, and presented to the Royal Society by Harold Jeffreys.

“Leaving apart on one side the practical difficulty of achieving randomness and the meaning of this word when applied to actual experiments…”

“It may be useful to point out that although we are frequently witnessing controversies in which authors try to defend one or another system of the theory of probability as the only legitimate, I am of the opinion that several such theories may be and actually are legitimate, in spite of their occasionally contradicting one another. Each of these theories is based on some system of postulates, and so long as the postulates forming one particular system do not contradict each other and are sufficient to construct a theory, this is as legitimate as any other.”

This paper is fairly long, in part because Neyman starts by setting out Kolmogorov's axioms of probability. This is of historical interest, but it is also needed for Neyman to oppose his notion of probability to Jeffreys' (which is the same from a formal perspective, I believe!). He actually spends a fair chunk explaining why constants cannot have anything but trivial probability measures, getting ready to state that an a priori distribution has no meaning (p.343) and that, in the rare cases it does, it is mostly unknown. While reading the paper, I thought that the distinction was more in terms of frequentist or conditional properties of the estimators, Neyman's arguments paving the way to his definition of a confidence interval, which assumes repeatability of the experiment under the same conditions and hence the same parameter value (p.344).

“The advantage of the unbiassed [sic] estimates and the justification of their use lies in the fact that in cases frequently met the probability of their differing very much from the estimated parameters is small.”

“…the maximum likelihood estimates appear to be what could be called the best “almost unbiassed [sic]” estimates.”

It is also quite interesting to read that the principle behind insisting on unbiasedness is one of producing small errors, because this is not that often the case, as shown by the complete class theorems of Wald (ten years later). And that maximum likelihood is somewhat relegated to a secondary rank, almost unbiased being understood as consistent. A most amusing part of the paper is when Neyman inverts the credible set into a confidence set, that is, turns what is random into a constant and vice versa, with the justification that the credible interval has zero or one coverage, while the confidence interval has a long-run validity of returning the correct rate of success. What is equally amusing is that the boundaries of a credible interval turn into functions of the sample, hence could be evaluated on a frequentist basis, as done later by Dennis Lindley and others like Welch and Peers, but Neyman fails to see this and turns the bounds into hard values for a given sample.
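To illustrate this last point with a quick simulation sketch of my own (not in the paper, and with arbitrary numerical values): for a known-variance Normal sample with a flat prior on the mean, the 95% credible interval has bounds that are functions of the sample, so its frequentist coverage can be checked by repeated sampling, as Welch and Peers later established analytically:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(42)
    theta, sigma, n = 1.0, 2.0, 10                    # illustrative true values
    half = norm.ppf(0.975) * sigma / np.sqrt(n)       # credible half-width

    reps, hits = 50_000, 0
    for _ in range(reps):
        xbar = rng.normal(theta, sigma / np.sqrt(n))  # sampling the sufficient statistic
        hits += xbar - half <= theta <= xbar + half   # flat-prior 95% credible interval
    print(hits / reps)   # about 0.95: the credible bounds pass the frequentist test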

“This, however, is not always the case, and in general there are two or more systems of confidence intervals possible corresponding to the same confidence coefficient α, such that for certain sample points, E’, the intervals in one system are shorter than those in the other, while for some other sample points, E”, the reverse is true.”

The resulting construction of a confidence interval is then awfully convoluted when compared with the derivation of an HPD region, going through acceptance regions that are the dual of the confidence interval (in the sampling space), while apparently [from my hasty read] missing a rule to order them, and rejecting the notion of a possibly empty confidence interval, which, while being of practical interest, clashes with its frequentist backup.
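As a hedged illustration of the duality (my own sketch, not Neyman's derivation, with all numerical values arbitrary): pick for each theta the equal-tailed acceptance region for the sample mean and recover the confidence set for an observed mean by scanning a grid of theta values. The equal-tailed choice is only one of the many systems of the same level evoked in the quote above, which is precisely where an ordering rule would be needed:

    import numpy as np
    from scipy.stats import norm

    sigma, n, alpha = 1.0, 25, 0.05
    half = norm.ppf(1 - alpha / 2) * sigma / np.sqrt(n)  # half-width of A(theta)

    xbar = 0.37                                 # an arbitrary observed sample mean
    grid = np.linspace(xbar - 1.0, xbar + 1.0, 100_001)
    accepted = np.abs(xbar - grid) <= half      # is xbar inside A(theta)?

    print(grid[accepted].min(), grid[accepted].max())  # confidence set by inversion
    print(xbar - half, xbar + half)                    # matching closed-form interval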

Paris-Dauphine photograph competition [jatp]

Posted in pictures, University life on March 22, 2017 by xi'an

Abel Prize goes to Yves Meyer

Posted in Books, pictures, University life on March 21, 2017 by xi'an


Just heard the great news that the Abel Prize for 2017 goes to Yves Meyer! Yves Meyer is an emeritus professor at the École Normale Supérieure de Cachan who has made fundamental contributions to number theory, operator theory, and harmonic analysis, and he is one of the originators of the theory of wavelets and multiresolution analysis. Among other recognitions and prizes, he was an invited speaker at the International Congress of Mathematicians in 1970 (Nice), 1983 (Warsaw), and 1990 (Kyoto), and was awarded the Gauß Prize in 2010. Congratulations and total respect to Yves Meyer!!!
