Archive for Society for Imprecise Probability

O’Bayes 19/1 [snapshots]

Posted in Books, pictures, Statistics, University life on June 30, 2019 by xi'an

Although yesterday's tutorials of O'Bayes 2019 were poorly attended, despite being great entries into objective Bayesian model choice, recent advances in MCMC methodology, and the multiple layers of BART, a turnout for which I have to blame myself, having stuck the beginning of O'Bayes too closely to the end of BNP, so that only the most dedicated could manage the commute from Oxford to Coventry to reach Warwick in time, the first day of talks was well attended, despite weekend commitments, conference fatigue, and perfect summer weather! Here are some snapshots from my bench (with apologies for not covering better the more theoretical talks I had trouble following, owing to an early and intense morning swimming lesson! Like Steve Walker's utility-based derivation of priors that generalise maximum entropy priors. But being entirely independent from the model does not sound to me like such a desirable feature… And Natalia Bochkina's Bernstein-von Mises theorem for a location-scale semi-parametric model, including a clever construct of a mixture of two Dirichlet priors to achieve proper convergence.)

Jim Berger started the day with a talk on imprecise probabilities, involving the Society for Imprecise Probability, which I discovered while reading Keynes' book, and offering a neat resolution of the Jeffreys-Lindley paradox: once the null is re-expressed as an imprecise null, its posterior probability no longer converges to one, but to a limit depending on the prior modelling, if a prior on the bias is involved as well. Chris discussed the talk and mentioned a recent work with Edwin Fong on reinterpreting the marginal likelihood as exhaustive cross-validation, summing over all possible subsets of the data [using the log marginal predictive]; see the toy numerical check below.

Håvard Rue gave a follow-up to his València O'Bayes 2015 talk on PC-priors, with a pretty hilarious introduction on his difficulties with constructing priors and counselling students about their Bayesian modelling, and a list of principles and desiderata for defining a reference prior. However, I somewhat disagree with his argument that the Kullback-Leibler divergence from the simpler (base) model cannot be scaled, as it is essentially a log-likelihood. And it feels like multivariate parameters need some sort of separability to define distance(s) to the base model, since the distance somewhat summarises the whole departure from the simpler model. (Håvard also matched my achievement of putting an ostrich in a slide!) In his discussion, Robin Ryder made a very pragmatic recap of the difficulties with constructing priors, pointing out a natural link with ABC (which brings us back to Don Rubin's motivation for introducing the algorithm as a formal thought experiment).
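As an aside on that marginal-likelihood-as-cross-validation reading, the identity at its core is the chain-rule (prequential) decomposition of the evidence: for any ordering of the data, the log marginal likelihood is the cumulative sum of one-step-ahead log posterior predictives, and averaging over orderings is what links it to exhaustive cross-validation. Below is a toy numerical check of the decomposition, my own sketch rather than the authors' code, on a conjugate Normal-mean model with made-up values for the prior mean and the variances.

```python
# toy check: log evidence = sum of one-step-ahead log posterior predictives,
# for ANY ordering of the data (conjugate Normal mean model, known variance)
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)
n, sigma2, tau2, mu0 = 6, 1.0, 2.0, 0.5          # noise var, prior var, prior mean (made up)
y = rng.normal(1.0, np.sqrt(sigma2), size=n)

# closed-form log marginal likelihood: y ~ N(mu0 * 1, sigma2 * I + tau2 * 11')
cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
log_evidence = multivariate_normal(mean=np.full(n, mu0), cov=cov).logpdf(y)

def cumulative_log_predictive(y_ordered):
    """Sum of one-step-ahead log posterior predictives under the same prior."""
    total, mu_n, tau2_n = 0.0, mu0, tau2
    for yt in y_ordered:
        # posterior predictive for the next observation: N(mu_n, tau2_n + sigma2)
        total += norm(mu_n, np.sqrt(tau2_n + sigma2)).logpdf(yt)
        # conjugate update of the posterior on the mean
        prec = 1.0 / tau2_n + 1.0 / sigma2
        mu_n = (mu_n / tau2_n + yt / sigma2) / prec
        tau2_n = 1.0 / prec
    return total

# the decomposition holds exactly for every permutation of the data
for _ in range(3):
    perm = rng.permutation(n)
    assert np.isclose(log_evidence, cumulative_log_predictive(y[perm]))
print("log evidence:", log_evidence)
```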

Sara Wade gave the final talk of the day, on her work on Bayesian cluster analysis, whose discussion in Bayesian Analysis I alas missed. Cluster estimation, as mentioned frequently on this blog, is a rather frustrating challenge despite the simple formulation of the problem. (And I will not mention Larry's tequila analogy!) The current approach is based on loss functions directly addressing the clustering aspect, integrating out the parameters, which produces the interesting notion of neighbourhoods of partitions and hence credible balls in the space of partitions. It still remains unclear to me that cluster estimation is at all achievable, since the partition space explodes with the sample size, making the most probable clustering ever more unlikely in that space. Somewhat paradoxically, the paper concludes that estimating the clustering produces a more reliable estimate of the number of clusters than looking at the marginal distribution of this number. In her discussion, Clara Grazian also pointed out the ambivalent use of clustering, where the intended meaning somehow diverges from the meaning induced by the mixture model.
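For readers wanting to see the loss-based estimate in action, here is a minimal sketch, mine and not Sara Wade's code, of a common variant: build the posterior similarity matrix from MCMC allocation draws and return the sampled partition minimising the posterior expected Binder loss, used here as a simple stand-in for the losses discussed in the paper.

```python
# minimal sketch of a loss-based clustering point estimate from MCMC output
import numpy as np

def posterior_similarity(allocs):
    """allocs: (S, n) array of cluster labels over S MCMC draws.
    Returns the n x n matrix of co-clustering probabilities P(c_i = c_j | data)."""
    S, n = allocs.shape
    psm = np.zeros((n, n))
    for c in allocs:
        psm += (c[:, None] == c[None, :])
    return psm / S

def expected_binder_loss(partition, psm):
    """Posterior expected Binder loss of a candidate partition (up to constants)."""
    same = (partition[:, None] == partition[None, :]).astype(float)
    # penalise pairs wrongly joined and pairs wrongly split, weighted by the PSM
    loss = same * (1.0 - psm) + (1.0 - same) * psm
    iu = np.triu_indices_from(loss, k=1)
    return loss[iu].sum()

def cluster_point_estimate(allocs):
    psm = posterior_similarity(allocs)
    losses = [expected_binder_loss(c, psm) for c in allocs]
    return allocs[int(np.argmin(losses))], psm

# toy usage with fake allocation draws for n = 5 observations;
# label switching is harmless here since only co-clustering matters
draws = np.array([[0, 0, 1, 1, 1],
                  [0, 0, 0, 1, 1],
                  [0, 0, 1, 1, 2],
                  [1, 1, 0, 0, 0]])
best, psm = cluster_point_estimate(draws)
print("point-estimate partition:", best)
```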

can we trust computer simulations?

Posted in Books, pictures, Statistics, University life on July 10, 2015 by xi'an


How can one validate the outcome of a simulation model? Or can we even imagine validation of this outcome? This was the starting question for the conference I attended in Hannover. Which obviously engaged me to the utmost. Relating to some past experiences like advising a student working on accelerated tests for fighter electronics. And failing to agree with him on validating a model translating those accelerated tests into a realistic setting. Or reviewing this book on climate simulation three years ago while visiting Monash University. Since I discuss most talks of the day in detail below, here is an opportunity to opt away!

Keynes and the Society for imprecise probability

Posted in Books, Statistics on March 30, 2010 by xi'an

When completing my comments on Keynes' A Treatise On Probability, I found, thanks to an Og's reader, that Keynes is held in high esteem (as a probabilist) by the members of the Society for Imprecise Probability. The goals of the society are stated as

The Society for Imprecise Probability: Theories and Applications (SIPTA) was created in February 2002, with the aim of promoting the research on imprecise probability. This is done through a series of activities for bringing together researchers from different groups, creating resources for information, dissemination and documentation, and making other people aware of the potential of imprecise probability models.

The Society has its roots in the Imprecise Probabilities Project conceived in 1996 by Peter Walley and Gert de Cooman and its creation has been encouraged by the success of the ISIPTA conferences.

Imprecise probability is understood in a very wide sense. It is used as a generic term to cover all mathematical models which measure chance or uncertainty without sharp numerical probabilities. It includes both qualitative (comparative probability, partial preference orderings, …) and quantitative modes (interval probabilities, belief functions, upper and lower previsions, …). Imprecise probability models are needed in inference problems where the relevant information is scarce, vague or conflicting, and in decision problems where preferences may also be incomplete.

The society sees J.M. Keynes as a precursor of the Dempster-Shafer perspective on probability, whose Bayesian version is represented in Peter Walley's book, Statistical Reasoning with Imprecise Probabilities, owing to Keynes' remark in A Treatise On Probability (Chapter XV) that “many probabilities can be placed between numerical limits”. Given that the book does not expand on how to take advantage of this generalisation of probabilities, but instead sees it as an impediment to probabilising the parameter space, I would think the remark is more representative of the general confusion between true (i.e., model-related) probabilities and their (observation-based) estimates.
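To make the notion of probabilities “placed between numerical limits” slightly more concrete, here is a toy illustration, mine rather than SIPTA's, of interval probabilities and lower/upper previsions: a credal set is represented by a finite collection of probability vectors over the same outcomes, and lower/upper values are simply minima/maxima over that set.

```python
# toy illustration of interval probabilities via a finite credal set
import numpy as np

credal_set = np.array([[0.5, 0.3, 0.2],   # several plausible pmfs over 3 outcomes
                       [0.4, 0.4, 0.2],
                       [0.6, 0.2, 0.2]])

def lower_upper_probability(event):
    """event: boolean mask over outcomes; returns (lower, upper) probability."""
    probs = credal_set[:, event].sum(axis=1)
    return probs.min(), probs.max()

def lower_upper_prevision(gamble):
    """gamble: payoff vector over outcomes; returns (lower, upper) expectation."""
    exps = credal_set @ gamble
    return exps.min(), exps.max()

print(lower_upper_probability(np.array([True, False, False])))  # probability interval for outcome 0
print(lower_upper_prevision(np.array([1.0, 0.0, -1.0])))        # expectation interval for this gamble
```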