## Archive for Canada

## CASSIS [in Vancouver]

Posted in pictures, Running, Statistics, Travel, University life with tags Boxing Day, Britain, British Columbia, Canada, conferences, JSM 2018, Luke Bornn, statistics and sports, Vanco, workshop on May 29, 2018 by xi'an

**M**y friend Luke Bornn is running a workshop on Statistics in Sports in Vancouver the day after JSM, with sessions on American football, basketball, and other sports. Not exactly a continuation of the long summer of British conferences, unless one counts the “British” in British Columbia, but definitely worth considering as a boxing day for JSM!

## distributions for parameters [seminar]

Posted in Books, Statistics, University life with tags Bayesian paradigm, BFF4, Canada, CANSSI, confidence distribution, COPSS Award, fiducial inference, foundations, frequentist inference, Nancy Reid, National Academy of Science, seminar, Université Paris Dauphine, University of Toronto on January 22, 2018 by xi'an

**N**ext Thursday, January 25, Nancy Reid will give a seminar in Paris-Dauphine on distributions for parameters, covering different statistical paradigms and bringing new light on the foundations of statistics. (Coffee is at 10am in the Maths department common room and the talk is at 10:15 in room A, second floor.)

Nancy Reid is University Professor of Statistical Sciences and the Canada Research Chair in Statistical Theory and Applications at the University of Toronto, an internationally acclaimed statistician, and a 2014 Fellow of the Royal Society of Canada. She received the Order of Canada in 2015, was elected a foreign associate of the National Academy of Sciences in 2016, and has been awarded many other prestigious statistical and science honours, including the Committee of Presidents of Statistical Societies (COPSS) Award in 1992.

Nancy Reid’s research focuses on finding more accurate and efficient methods for drawing conclusions from complex data sets, ultimately helping scientists find specific solutions to specific problems.

There is currently some renewed interest in developing distributions for parameters, often without relying on prior probability measures. Several approaches have been proposed and discussed in the literature and in a series of “Bayes, fiducial, and frequentist” workshops and meeting sessions. Confidence distributions, generalized fiducial inference, inferential models, and belief functions are some of the terms associated with these approaches. I will survey some of this work, with particular emphasis on common elements and calibration properties. I will try to situate the discussion in the context of the current explosion of interest in big data and data science.

## positions in North-East America

Posted in Kids, pictures, Statistics, Travel, University life with tags academic position, Cambridge, Canada, Harvard University, Massachusetts, professorship, Québec, Université de Montréal, USA on September 14, 2017 by xi'an

**T**oday I received emails about openings at both Université de Montréal, Canada, and Harvard University, USA:

- Professor in Statistics, Biostatistics or Data Science at U de M, deadline October 30th, 2017, a requirement being proficiency in the French language;
- Tenure-Track Professorship in Statistics at Harvard University, Department of Statistics, details there.

## Mnt Rundle [jatp]

Posted in Statistics with tags Alberta, Banff, Banff Centre, BIRS, Canada, Canadian Rockies, jatp, Mount Rundle, Tunnel Mountain, winter light on March 3, 2017 by xi'an

## machine learning-based approach to likelihood-free inference

Posted in Statistics with tags 17w5025, ABC'ory, Banff, BIRS, Canada, classification, kernel density estimator, likelihood-free methods, local regression, logistic regression, machine learning, semi-automatic ABC on March 3, 2017 by xi'an

**A**t ABC’ory last week, Kyle Cranmer gave an extended talk on estimating the likelihood ratio with classification tools, connected with a 2015 arXival. The idea is that the likelihood ratio is invariant under any transform s(·) that is monotone in the likelihood ratio itself. It took me a few minutes (after the talk) to understand what this meant, because such a transform actually depends on the parameter values in the denominator and the numerator of the ratio. For instance, the ratio itself is a proper transform, in the sense that the likelihood ratio based on the distribution of the likelihood ratio under both parameter values is the same as the original likelihood ratio. So is the (naïve Bayes) probability version of the likelihood ratio. This reminds me of the invariance in Fearnhead and Prangle (2012) of the Bayes estimate given x and of the Bayes estimate given the Bayes estimate. I also feel there is a connection with Geyer’s logistic regression estimate of normalising constants, mentioned several times on the ‘Og. (The paper mentions this connection in its conclusion.)
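To make the classification angle concrete, here is a minimal sketch of the likelihood-ratio-by-classification idea on a toy Gaussian example of my own devising (not code from the paper or the talk): with balanced simulation sizes under the two parameter values, the classifier output s(x) can be mapped back to the ratio as s/(1−s). The choice of a Gaussian model and of scikit-learn's logistic regression are assumptions for illustration only.

```python
# Toy sketch (mine, not the paper's): estimate the likelihood ratio
# p(x|θ¹)/p(x|θ²) for two *fixed* parameter values by training a classifier
# to separate simulations from each value; with balanced classes the
# classifier output s(x) recovers the ratio as s/(1-s).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
theta1, theta2, n = 0.0, 1.0, 10_000           # two fixed parameter values

x1 = rng.normal(theta1, 1.0, size=(n, 1))      # simulations under θ¹
x2 = rng.normal(theta2, 1.0, size=(n, 1))      # simulations under θ²
X = np.vstack([x1, x2])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = from θ¹, 0 = from θ²

clf = LogisticRegression().fit(X, y)

def ratio_hat(x):
    """Estimated likelihood ratio at x from the classifier output s(x)."""
    s = clf.predict_proba(np.atleast_2d(x))[:, 1]
    return s / (1.0 - s)

# sanity check at x = 0, where the exact Gaussian ratio is exp(1/2) ≈ 1.65
print(ratio_hat([[0.0]]))
```

With equal-variance Gaussians the log-ratio is linear in x, so logistic regression is well specified here; in a genuinely likelihood-free setting the classifier would of course only approximate the ratio.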

Now, back to the paper (which I read the night after the talk, to get a global perspective on the approach): the ratio is of course unknown and the implementation therein is to estimate it by a classification method, that is, by estimating the probability that a given x comes from one distribution rather than the other. Once this estimate is produced, its distributions under both values of the parameter can be estimated by density estimation, hence an estimated likelihood ratio can be produced, with better prospects since this is a one-dimensional quantity. An objection to this derivation is that it intrinsically depends on the pair of parameters θ¹ and θ² used therein: changing to another pair requires a new ratio, new simulations, and new density estimations. When moving to a continuous collection of parameter values, in a classical setting, the likelihood ratio involves two maxima, which can be formally represented in (3.3) as a maximum over a likelihood ratio based on the estimated densities of likelihood ratios, except that each evaluation of this ratio seems to require another simulation. (This makes the comparison with ABC more complex than presented in the paper [p.18], since ABC’s major computational hurdle lies in the production of the reference table and, to a lesser degree, in the local regression, both items that can be recycled for any new dataset.) A smoothing step is then to include the pair of parameters θ¹ and θ² as further inputs of the classifier. There still remains the computational burden of simulating enough values of s(x) to estimate its density for every new value of θ¹ and θ². And while the projection from x to s(x) does effectively reduce the dimension of the problem to one, the method still aims at estimating with some degree of precision the density of x, so it cannot escape the curse of dimensionality. The sleight of hand resides in the classification step, since it is equivalent to estimating the likelihood ratio. I thus fail to understand how and why a poor classifier can then lead to a good approximation of the likelihood ratio “obtained by calibrating s(x)” (p.16), where calibrating means estimating the density.
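For what the calibration step is worth, here is my own toy continuation of the sketch above (again an assumption-laden illustration, not the paper's code): the densities of the one-dimensional statistic s(x) under θ¹ and θ² are estimated by kernel density estimation and their ratio is used as the likelihood-ratio estimate. The Gaussian model, the logistic classifier, and scipy's gaussian_kde are all my own choices.

```python
# Toy sketch of the calibration step as I read it (mine, not the paper's):
# rather than mapping s(x) to s/(1-s), estimate the densities of the scalar
# s(x) under θ¹ and θ² by kernel density estimation and take their ratio.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
theta1, theta2, n = 0.0, 1.0, 10_000
x1 = rng.normal(theta1, 1.0, size=(n, 1))      # simulations under θ¹
x2 = rng.normal(theta2, 1.0, size=(n, 1))      # simulations under θ²

clf = LogisticRegression().fit(
    np.vstack([x1, x2]), np.concatenate([np.ones(n), np.zeros(n)]))

s1 = clf.predict_proba(x1)[:, 1]               # s(x) for simulations under θ¹
s2 = clf.predict_proba(x2)[:, 1]               # s(x) for simulations under θ²
kde1, kde2 = gaussian_kde(s1), gaussian_kde(s2)

def calibrated_ratio(x):
    """Likelihood-ratio estimate from the densities of s(x), not of x."""
    s = clf.predict_proba(np.atleast_2d(x))[:, 1]
    return kde1(s) / kde2(s)

print(calibrated_ratio([[0.0]]))               # exact ratio is exp(1/2) ≈ 1.65
```

Note that both kernel density estimates must be re-done for every new pair (θ¹, θ²), which is precisely the computational burden objected to above.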