Archive for indoor swimming

O’Bayes 19/1 [snapshots]

Posted in Books, pictures, Statistics, University life on June 30, 2019 by xi'an

Although the tutorials of O'Bayes 2019 yesterday were poorly attended, despite being great entries into objective Bayesian model choice, recent advances in MCMC methodology, and the multiple layers of BART (for which I have only myself to blame, having scheduled the beginning of O'Bayes too close to the end of BNP, so that only the most dedicated could manage the commute from Oxford to Coventry to reach Warwick in time), the first day of talks was well attended, despite weekend commitments, conference fatigue, and perfect summer weather! Here are some snapshots from my bench. (And apologies for not covering better the more theoretical talks I had trouble following, due to an early and intense morning swimming lesson! Like Steve Walker's utility-based derivation of priors that generalise maximum entropy priors, although being entirely independent from the model does not sound to me like such a desirable feature… And Natalia Bochkina's Bernstein-von Mises theorem for a location-scale semi-parametric model, including a clever construct of a mixture of two Dirichlet priors to achieve proper convergence.)

Jim Berger started the day with a talk on imprecise probabilities, involving the Society for Imprecise Probability, which I discovered while reading Keynes' book. He offered a neat resolution of the Jeffreys-Lindley paradox: once the null is re-expressed as an imprecise null, the posterior probability of the null no longer converges to one, but to a limit that depends on the prior modelling, if a prior on the bias is involved as well. Chris discussed the talk and mentioned a recent work with Edwin Fong on reinterpreting the marginal likelihood as exhaustive cross-validation, summing over all possible subsets of the data [using the log marginal predictive].

Håvard Rue gave a follow-up to his Valencià O'Bayes 2015 talk on PC-priors, with a pretty hilarious introduction on his difficulties with constructing priors and counseling students about their Bayesian modelling, and a list of principles and desiderata for defining a reference prior. However, I somewhat disagree with his argument that the Kullback-Leibler divergence from the simpler (base) model cannot be scaled, as it is essentially a log-likelihood. And it feels like multivariate parameters need some sort of separability to define distance(s) to the base model, since the distance somewhat summarises the whole departure from the simpler model. (Håvard also matched my achievement of putting an ostrich in a slide!) In his discussion, Robin Ryder made a very pragmatic recap of the difficulties with constructing priors, pointing out a natural link with ABC (which brings us back to Don Rubin's motivation for introducing the algorithm as a formal thought experiment).

Sara Wade gave the final talk of the day, about her work on Bayesian cluster analysis, whose discussion in Bayesian Analysis I alas missed. Cluster estimation, as mentioned frequently on this blog, is a rather frustrating challenge despite the simple formulation of the problem. (And I will not mention Larry's tequila analogy!) The current approach is based on loss functions directly addressing the clustering aspect, integrating out the parameters, which produces the interesting notion of neighbourhoods of partitions and hence credible balls in the space of partitions. It still remains unclear to me that cluster estimation is at all achievable, since the partition space explodes with the sample size and hence makes the most probable cluster more and more unlikely in that space. Somewhat paradoxically, the paper concludes that estimating the cluster produces a more reliable estimator of the number of clusters than looking at the marginal distribution of this number. In her discussion, Clara Grazian also pointed out the ambivalent use of clustering, where the intended meaning somehow diverges from the meaning induced by the mixture model.
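To give a sense of how fast the partition space explodes (this illustration is mine, not from the paper): the number of partitions of n observations is the Bell number B(n), which can be computed with the classical Bell-triangle recurrence.

```python
def bell(n):
    """Bell number B(n): the number of partitions of a set of n items,
    computed via the Bell triangle (each row starts with the previous
    row's last entry; each next entry adds the entry above)."""
    row = [1]  # first row of the Bell triangle
    for _ in range(n - 1):
        new_row = [row[-1]]
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
    return row[-1]

# Even modest sample sizes yield astronomically many partitions:
for n in (5, 10, 20):
    print(n, bell(n))
# 5 → 52,  10 → 115975,  20 → 51724158235372
```

With tens of billions of partitions already at n=20, the posterior mass on any single partition is tiny, which is why loss-based point estimates and credible balls over partitions are appealing alternatives to reading off the modal clustering.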

pool etiquette [and lane rage]

Posted in Statistics on March 15, 2019 by xi'an

A funny entry in today's The Guardian about what turns swimmers mad at the pool. A form (foam?) of road rage in the water… Since I started a daily swim in mid-December to compensate for my not-running for an indeterminate length of time, I can primarily if irrationally relate to the reactions reported in the article. About the pain of passing other swimmers and being brushed or kicked by faster runners oops swimmers trying to squeeze in the middle (of nowhere). Irrationally so because, at a rational level, there is nowhere to go really, except the end of the lane and back, so waiting or turning back earlier is not much of an imposition. But I still feel a sort of "road rage" when I cannot turn back and start again without delay… I have been thinking for the past weeks (while going back and forth, back and forth, dozens of times) of ways to rationalise the whole operation, but cannot see a way to make all swimmers go exactly the same speed in a given lane, if only because most swimmers switch stroke between lengths. Except me, as I can only and barely handle the breaststroke (thanks to lessons from Nick!), a stroke that many seem to resent. To the point of calling for breaststroke-free lanes… Rationally, I think the problem is the same with every activity involving moving at different relative speeds in a busy lane. Runners get annoyed at breaking their pace, cyclists at braking or, worse!, touching ground. It is just more concentrated in a 25m swimming lane on a busy day. (Which is why I really try to optimise my visits to the pool to be in the early morning or in the mid-afternoon. And again and again promise myself to skip the dreadful Sunday morning session!) L'enfer, c'est les autres (hell is other people), especially when they swim at a different pace!