Archive for University of Glasgow
Mike Titterington [in memoriam²]
Posted in pictures, Statistics, University life with tags Mike Titterington, University of Glasgow on April 19, 2023 by xi'an
irreverent Mike [in memoriam]
Posted in Books, Kids, pictures, University life with tags Bulletin in Applied Statistics, consensus prior, Mike Titterington, non-informative priors, Thomas Bayes, University of Glasgow, vague priors on April 17, 2023 by xi'an
While I could not find an on-line picture of Mike Titterington, another testimony to his modesty and selflessness, I remembered this series of sketches on priors he made for the Bulletin in Applied Statistics in 1982, under the title Irreverent Bayes!
Mike Titterington (1945-2023)
Posted in Books, Kids, pictures, Travel, University life with tags Biometrika, Edinburgh, editor, finite mixtures, Glasgow, memories, Mike Titterington, obituary, Scotland, University of Glasgow on April 14, 2023 by xi'an
Most sadly, I just heard from Glasgow that my friend and coauthor Mike Titterington passed away last weekend. While a significant figure in the field and a precursor in many ways, from mixtures to machine learning, Mike was one of the kindest persons ever, tolerant to a fault and generous with his time, and I enjoyed very much my yearly visits to Glasgow to work with him (and elope to the hills). This was also the time he was the (sole) editor of Biometrika and to this day I remain amazed at the amount of effort he dedicated to it, annotating every single accepted paper with his red pen during his morning bus commute and having the edited copy mailed to the author(s). The last time I saw him was in October 2019, when I was visiting the University of Edinburgh and the newly created Bayes Centre, and he came to meet me for an afternoon tea, despite being in poor health… Thank you for all these years, Mike!
Poisson-Belgium 0-0
Posted in Statistics with tags Belgium, Brazil, data-analytics, Denmark, FIFA, football World Cup, Galway, Germany, Glasgow Celtics, Glasgow Rangers, Ireland, Mexico, Nature, prediction, Scotland, Switzerland, University of Glasgow, University of Oxford, Uruguay, Warwick Mathematics Institute on December 5, 2022 by xi'an
“Statistical match predictions are more accurate than many people realize (…) For the upcoming Qatar World Cup, Penn’s model suggests that Belgium (…) has the highest chances of raising the famous trophy, followed by Brazil”
Even Nature had to run entries on the current football World Cup, with a paper on data-analytics reaching football coaches and teams. This is not exactly breaking news, as I remember visiting the Department of Statistics of the University of Glasgow in the mid-1990s and chatting with a very friendly doctoral student who was consulting on the side for the Glasgow Rangers (or Celtic?!) at the time, before going back to Ireland to continue with a local team (Galway?!).
The paper reports on different modellings, including a double-Poisson model by (PhD student) Matthew Penn from Oxford and (maths undergraduate) Joanna Marks from Warwick, which presumably resembles the double-Poisson version set by Leonardo Egidi et al. and posted on Andrew’s blog a few days ago, itself following an earlier model by my friends Karlis & Ntzoufras in 2003. While predictive models can obviously fail, this attempt missed the early eliminations of Belgium, Germany, Switzerland, Mexico, Uruguay, and Denmark from the cup. One possible reason imho is that national teams do not play that often and their players are employed by different clubs in many countries, hence are hard to assess, but I cannot claim any expertise or interest in the game.
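A double-Poisson score model of this kind can be sketched in a few lines: each team's goal count is drawn from an independent Poisson whose rate combines an attack strength and the opponent's defence strength, in the spirit of Karlis & Ntzoufras (2003). All team names, strengths, and rates below are made-up illustrative numbers, not values fitted from actual World Cup data.

```python
# Minimal sketch of a double-Poisson score model: two independent
# Poisson counts with rates built from attack/defence strengths.
# All strengths are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(42)

# (attack, defence) strengths on the log scale -- made-up numbers
teams = {"Belgium": (0.4, -0.3), "Morocco": (0.1, -0.2)}

def score_rates(home, away, base=1.3):
    """Poisson rates (home goals, away goals) for a fixture."""
    atk_h, def_h = teams[home]
    atk_a, def_a = teams[away]
    return base * np.exp(atk_h + def_a), base * np.exp(atk_a + def_h)

def match_probs(home, away, sims=100_000):
    """Monte Carlo win/draw/loss probabilities under the model."""
    lam_h, lam_a = score_rates(home, away)
    g_h = rng.poisson(lam_h, sims)
    g_a = rng.poisson(lam_a, sims)
    return (g_h > g_a).mean(), (g_h == g_a).mean(), (g_h < g_a).mean()

print(match_probs("Belgium", "Morocco"))
```

Tournament forecasts then chain such match simulations through the group and knock-out structure, which is where the compounding uncertainty (and the missed early eliminations) comes in.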
Finite mixture models do not reliably learn the number of components
Posted in Books, Statistics, University life with tags Bayes factor, David MacKay, Edinburgh, empirical Bayes methods, finite mixtures, ICML 2021, infinite mixture, Mike Titterington, Padova, posterior concentration, Radford Neal, Scotland, Università degli studi di Padova, University of Glasgow on October 15, 2022 by xi'an
When preparing my talk for Padova, I found that Diana Cai, Trevor Campbell, and Tamara Broderick wrote this ICML / PMLR paper last year on the impossible estimation of the number of components in a mixture.
“A natural check on a Bayesian mixture analysis is to establish that the Bayesian posterior on the number of components increasingly concentrates near the truth as the number of data points becomes arbitrarily large.” Cai, Campbell & Broderick (2021)
Which seems to contradict [my formerly-Glaswegian friend] Agostino Nobile, who showed in his thesis that the posterior on the number of components does concentrate at the true number of components, provided the prior contains that number in its support. As well as numerous papers on the consistency of the Bayes factor, including the one against an infinite-mixture alternative that we discussed in our recent paper with Adrien and Judith. It also reminded me of the rebuke I got in 2001 from the late David MacKay when I mentioned that I did not believe in estimating the number of components, both because of the impact of the prior modelling and because of the tendency of the data to push for more clusters as the sample size increases. (This was at a most lively workshop Mike Titterington and I organised at ICMS in Edinburgh, where Radford Neal also delivered an impromptu talk arguing against using the Galaxy dataset as a benchmark!)
“In principle, the Bayes factor for the MFM versus the DPM could be used as an empirical criterion for choosing between the two models, and in fact, it is quite easy to compute an approximation to the Bayes factor using importance sampling” Miller & Harrison (2018)
This is however a point already made by Miller & Harrison (2018), that the estimation of k logically goes south if the data does not come from the assumed mixture model. In their paper, Cai et al. demonstrate that the posterior on k diverges, even when the prior depends on the sample size, or even on the sample itself as in empirical Bayes solutions.
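The importance-sampling route to a marginal likelihood (hence to a Bayes factor) that the Miller & Harrison quote alludes to can be illustrated on a toy conjugate model, rather than the MFM/DPM setting itself: with a Normal likelihood and Normal prior the marginal is available in closed form, so the importance-sampling estimate can be checked against the truth. The model, proposal, and all numbers below are my own illustrative choices.

```python
# Toy check of importance-sampling marginal-likelihood estimation
# (the building block of a Bayes factor), NOT the MFM/DPM computation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20
x = rng.normal(0.5, 1.0, size=n)  # data, true mean 0.5

# Model: x_i ~ N(mu, 1), prior mu ~ N(0, 1).
# Exact log marginal: x ~ N(0, I + 11'), available in closed form.
exact = stats.multivariate_normal.logpdf(
    x, mean=np.zeros(n), cov=np.eye(n) + np.ones((n, n)))

# Importance sampling with an overdispersed Normal proposal
# centred at the (conjugate) posterior mean.
post_var = 1.0 / (n + 1.0)
post_mean = x.sum() * post_var
M = 10_000
prop_sd = np.sqrt(2.0 * post_var)
mu = rng.normal(post_mean, prop_sd, size=M)
log_w = (stats.norm.logpdf(mu)                        # prior
         + stats.norm.logpdf(x[:, None], mu).sum(0)   # likelihood
         - stats.norm.logpdf(mu, post_mean, prop_sd)) # proposal
# log of the weight average, stabilised against underflow
est = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
print(exact, est)  # the two log marginals should agree closely
```

The same recipe applied to both competing models gives an estimated Bayes factor as the ratio of the two marginal estimates; the practical difficulty in the mixture case is finding a proposal with controlled weight variance, which this well-behaved toy example sidesteps.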