## Archive for Cambridge

## stained glass to go

Posted in pictures, University life with tags Caius & Gonville College, Cambridge, Cambridge colleges, controversies, design of experiments, DNA helix, eugenics, Francis Crick, latin square, Ronald Fisher, stained glass, Venn diagram on July 6, 2020 by xi'an

## the Kouign-Amann experiment

Posted in Kids, pictures, Travel with tags beurre salé, Bretagne, Breton, butter, Cambridge, Harvard University, home cooking, Kouign-Amann on June 10, 2019 by xi'an

**H**aving found a recipe for Kouign-Amanns, these excessive cookies from Brittany that are essentially cooked salted butter!, I had a first try that ended in disaster (including a deep cut on the remaining thumb) and a second try that went better, as far as both food and body parts are concerned. (The name means cake of butter in Breton.)

The underlying dough is pretty standard up to the moment it starts being profusely buttered and layered, again and again, until it becomes sufficiently feuilleté to put in the oven. The buttery nature of the product, clearly visible in the first picture, implies the cookies must be kept in containers like these muffin pans to preserve their shape and keep the boiling butter from inundating the oven, two aspects I had not foreseen on the first attempt.

The other, if minor, drawback of these cookies is that they do not keep well, as they contain so much butter. They bring enough of a calorie intake for a hearty breakfast (and remind me of those I ate in Cambridge while last visiting Pierre).

## nested sampling when prior and likelihood clash

Posted in Books, Statistics with tags Cam river, Cambridge, conflicting prior, efficiency measures, efficient importance sampling, intractable constant, marginal likelihood, nested sampling, statistical evidence, tempering on April 3, 2018 by xi'an

**A** recent arXival by Chen, Hobson, Das, and Gelderblom proposes a new nested sampling implementation for cases when prior and likelihood disagree, making simulations from the prior inefficient. The paper holds the position that a single given prior is used over and over for all datasets that come along:

“…in applications where one wishes to perform analyses on many thousands (or even millions) of different datasets, since those (typically few) datasets for which the prior is unrepresentative can absorb a large fraction of the computational resources.” Chen et al., 2018

My reaction to this situation, provided (a) I want to implement nested sampling and (b) I realise there is a discrepancy, would be to resort to an importance sampling resolution, as we proposed in our Biometrika paper with Nicolas. Since one objection [from the authors] is that identifying outlier datasets is complicated (it should not be when the likelihood function can be computed) and time-consuming, sequential importance sampling could be implemented.
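To make the point that spotting such outlier datasets need not be complicated when the likelihood can be computed, here is a quick diagnostic (a sketch of my own, not taken from the paper): the effective sample size of prior draws weighted by the likelihood collapses when prior and likelihood clash. A toy check on a one-dimensional Gaussian model, where `prior_ess` is a hypothetical helper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def prior_ess(loglik, prior_draws):
    """Effective sample size of prior draws weighted by the likelihood;
    a small value flags a dataset whose likelihood clashes with the prior."""
    lw = loglik(prior_draws)
    lw = lw - lw.max()                 # stabilise the exponentials
    w = np.exp(lw)
    return w.sum() ** 2 / (w ** 2).sum()

theta = rng.standard_normal(10_000)    # draws from a N(0,1) prior

# a dataset compatible with the prior versus an outlier dataset in the tails
ess_ok = prior_ess(lambda th: norm.logpdf(0.5, loc=th), theta)
ess_bad = prior_ess(lambda th: norm.logpdf(8.0, loc=th), theta)
print(ess_ok, ess_bad)                 # ess_ok in the thousands, ess_bad tiny
```

Ranking datasets by this ESS before launching the heavy machinery would single out the unrepresentative ones at the cost of one likelihood evaluation per prior draw.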

“The posterior repartitioning (PR) method takes advantage of the fact that nested sampling makes use of the likelihood L(θ) and prior π(θ) separately in its exploration of the parameter space, in contrast to Markov chain Monte Carlo (MCMC) sampling methods or genetic algorithms which typically deal solely in terms of the product.” Chen et al., 2018

The above salesman line does not ring a particularly convincing chime in that nested sampling is about as myopic as MCMC, since it is based on the similar notion of a local proposal move, starting from the lowest-likelihood argument (the minimum likelihood estimator!) in the nested sample.
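For concreteness, here is a bare-bones version of that mechanism (a toy sketch of my own, not the paper's implementation) on a one-dimensional Gaussian: at each step the lowest-likelihood live point is discarded, its likelihood becomes the hard constraint, and a replacement is drawn from the constrained prior, here by brute-force rejection, which is only feasible in tiny examples:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy problem: N(0,1) prior, one observation y = 3, unit-variance likelihood
y = 3.0
loglik = lambda th: -0.5 * (y - th) ** 2 - 0.5 * np.log(2 * np.pi)

n_live, n_iter = 200, 1000
live = rng.standard_normal(n_live)           # live points drawn from the prior
ll = loglik(live)
contribs = []                                # log of each evidence increment

for i in range(n_iter):
    worst = np.argmin(ll)                    # lowest-likelihood live point
    # deterministic prior-volume shrinkage: X_i ~ exp(-i / n_live)
    dX = np.exp(-i / n_live) - np.exp(-(i + 1) / n_live)
    contribs.append(ll[worst] + np.log(dX))
    # replace the worst point by a prior draw above the likelihood threshold
    while True:
        cand = rng.standard_normal()
        if loglik(cand) > ll[worst]:
            break
    live[worst], ll[worst] = cand, loglik(cand)

# final term: remaining prior volume times the average live likelihood
X_final = np.exp(-n_iter / n_live)
contribs.append(np.log(X_final) + np.log(np.mean(np.exp(ll))))

logZ = np.log(np.sum(np.exp(contribs)))
logZ_exact = -0.5 * y**2 / 2 - 0.5 * np.log(4 * np.pi)   # log N(3; 0, sqrt 2)
print(logZ, logZ_exact)
```

The rejection step is exactly where practical implementations substitute a local (hence myopic) constrained move starting from the discarded point or another live point.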

“The advantage of this extension is that one can choose (π’,L’) so that simulating from π’ under the constraint L'(θ) > l is easier than simulating from π under the constraint L(θ) > l. For instance, one may choose an instrumental prior π’ such that Markov chain Monte Carlo steps adapted to the instrumental constrained prior are easier to implement than with respect to the actual constrained prior. In a similar vein, nested importance sampling facilitates contemplating several priors at once, as one may compute the evidence for each prior by producing the same nested sequence, based on the same pair (π’,L’), and by simply modifying the weight function.” Chopin & Robert, 2010

Since the authors propose to switch to a product (π’,L’) such that π’.L’=π.L, the solution appears like a special case of importance sampling, with the added drawback that when π’ is not normalised, its normalising constant must be estimated as well. (With an extra nested sampling implementation?) Furthermore, the advocated solution is to use tempering, which is not so obvious as it seems even in small dimensions, as the mass does not always diffuse to the relevant parts of the space. A more “natural” tempering would be to use a subsample in the (sub)likelihood for nested sampling and keep the remainder of the sample for weighting the evaluation of the evidence.
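As a numerical illustration of the identity π’.L’=π.L (my own toy sketch, not the authors' code), the evidence in a one-dimensional Gaussian model with a deliberate prior–likelihood clash is recovered far more efficiently by repartitioning towards a tempered, inflated-variance instrumental prior, which indeed amounts to plain importance sampling:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# toy 1D clash: N(0,1) prior, a single observation y = 10 with a unit-variance
# Gaussian likelihood, so the likelihood sits far in the prior tails
y = 10.0
Z_exact = norm.pdf(y, loc=0, scale=np.sqrt(2))      # exact evidence

# naive Monte Carlo from the prior: essentially never hits the likelihood bulk
th = rng.standard_normal(100_000)
Z_naive = norm.pdf(y, loc=th).mean()

# repartitioning: pi'(theta) prop. to pi(theta)^beta is a Gaussian with
# variance 1/beta (hence a known normalising constant), and L' = pi L / pi',
# so that pi' L' = pi L and the evidence is unchanged
beta = 0.05
th2 = rng.standard_normal(100_000) / np.sqrt(beta)  # draws from pi'
w = norm.pdf(th2) * norm.pdf(y, loc=th2) / norm.pdf(th2, scale=1 / np.sqrt(beta))
Z_pr = w.mean()

print(Z_exact, Z_naive, Z_pr)
```

Of course this relies on the tempered prior being available in normalised form; with an unnormalised π’, the extra constant resurfaces, which is precisely the drawback noted above.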

## positions in North-East America

Posted in Kids, pictures, Statistics, Travel, University life with tags academic position, Cambridge, Canada, Harvard University, Massachusetts, professorship, Québec, Université de Montréal, USA on September 14, 2017 by xi'an

**T**oday I received emails about openings in both Université de Montréal, Canada, and Harvard University, USA:

- Professor in Statistics, Biostatistics or Data Science at U de M, deadline October 30th, 2017, a requirement being proficiency in the French language;
- Tenure-Track Professorship in Statistics at Harvard University, Department of Statistics, details there.