How can one validate the outcome of a validation model? Or can we even imagine validating this outcome? This was the starting question for the conference I attended in Hannover, which obviously engaged me to the utmost, relating to past experiences such as advising a student working on accelerated tests for fighter electronics (and failing to agree with him on validating a model turning those accelerated tests into a realistic setting), or reviewing this book on climate simulation three years ago while visiting Monash University. Since I discuss most talks of the day in detail below, here is an opportunity to opt out!
- Bayesian Survival Model Based on Moment Characterization by Arbel, Julyan et al.
- A New Finite Approximation for the NGG Mixture Model: An Application to Density Estimation by Bianchini, Ilaria
- Distributed Estimation of Mixture Model by Dedecius, Kamil et al.
- Jeffreys’ Priors for Mixture Estimation by Grazian, Clara and X
- A Subordinated Stochastic Process Model by Palacios, Ana Paula et al.
- Bayesian Variable Selection for Generalized Linear Models Using the Power-Conditional-Expected-Posterior Prior by Perrakis, Konstantinos et al.
- Application of Interweaving in DLMs to an Exchange and Specialization Experiment by Simpson, Matthew
- On Bayesian Based Adaptive Confidence Sets for Linear Functionals by Szabó, Botond
- Identifying the Infectious Period Distribution for Stochastic Epidemic Models Using the Posterior Predictive Check by Alharthi, Muteb et al.
- A New Strategy for Testing Cosmology with Simulations by Killedar, Madhura et al.
- Formal and Heuristic Model Averaging Methods for Predicting the US Unemployment Rate by Kolly, Jeremy
- Bayesian Estimation of the Aortic Stiffness based on Non-invasive Computed Tomography Images by Lanzarone, Ettore et al.
- Bayesian Filtering for Thermal Conductivity Estimation Given Temperature Observations by Martín-Fernández, Laura et al.
- A Mixture Model for Filtering Firms’ Profit Rates by Scharfenaker, Ellis et al.
The next Nordic-Baltic Biometric Conference will take place in Reykjavik next June, a few days after the O-Bayes 15 meeting in València. I will attend, as the organisers were kind enough to invite me to give a talk, and I have high hopes of taking a few days off to go hiking day and night! Registration is now open, as is the call for abstracts.
So today was the NIPS 2014 workshop, “ABC in Montréal“, which started with a fantastic talk by Juliane Liepe on some exciting applications of ABC to the migration of immune cells, with the analysis of movies of those cells acting to heal a damaged fly wing and a cut fish tail. Quite amazing videos, really. (With the great entry line of ‘We have all cut a finger at some point in our lives’!) The statistical model behind those movies was a random walk on a grid, with different drift and bias features serving as model characteristics. Frank Wood managed to deliver his talk despite a severe case of food poisoning, with a great illustration of probabilistic programming that made me understand (at last!) the very idea of probabilistic programming. And Vikash Mansinghka presented some applications in image analysis. Those two talks led me to realise why probabilistic programming is so close to ABC, with a programming touch! Hence why I was invited to talk today! Then Dennis Prangle presented his latest version of lazy ABC, which I have already commented on the ‘Og and which is somewhat connected with our delayed acceptance algorithm, to the point that something common may stem from the two notions. Michael Blum ended the day with provocative answers to Ted Meeds’ provocative questions as to whether or not machine learning needs ABC (Ans. No!) and whether or not machine learning can help ABC (Ans. ???), with a happy mix-up between mechanistic and phenomenological models that helped generate discussion from the floor.
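For readers who have never run ABC on the kind of model described above, here is a minimal rejection-ABC sketch on a toy biased random walk on a grid. Everything below (the parameter values, the net-displacement summary, the flat prior, and the tolerance) is my own illustrative assumption, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_track(drift, bias, n_steps=200):
    # Toy biased random walk on a 2D grid: each step moves +/-1 in x and in y,
    # with `drift` pushing the cell towards +x and `bias` towards +y.
    dx = rng.choice([-1, 1], size=n_steps, p=[0.5 - drift / 2, 0.5 + drift / 2])
    dy = rng.choice([-1, 1], size=n_steps, p=[0.5 - bias / 2, 0.5 + bias / 2])
    return np.column_stack((dx.cumsum(), dy.cumsum()))

def summary(track):
    # crude summary statistic: net displacement of the simulated cell track
    return track[-1] - track[0]

# "observed" track simulated from made-up parameter values
obs = summary(simulate_track(drift=0.3, bias=0.1))

# plain rejection ABC: keep prior draws whose simulated summary lands near obs
accepted = []
for _ in range(5_000):
    theta = rng.uniform(-0.5, 0.5, size=2)                    # flat prior on (drift, bias)
    if np.linalg.norm(summary(simulate_track(*theta)) - obs) < 20:   # arbitrary tolerance
        accepted.append(theta)
posterior_sample = np.array(accepted)
```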
The posters were also of much interest: calibration as a distance measure by Michael Guttman, in continuation of the poster he gave at MCMski, and Aaron Smith presenting his work with Luke Bornn, Natesh Pillai, and Dawn Woodard on why a single pseudo-sample is enough for ABC efficiency. This gave me the opportunity to discuss with him the apparent contradiction with the result of Krzys Łatuszyński and Anthony Lee that geometric convergence of ABC-MCMC is only attained with a random number of pseudo-samples… and to wonder whether there is a geometric versus binomial dilemma in this setting, namely whether simulating pseudo-samples until one is accepted would be more efficient than running a single one and discarding it when it is too far. So, although the audience was not that large (when compared with the other “ABC in…” meetings, and when considering the 2500+ attendees at NIPS over the week!), it was a great day where I learned a lot, did not doze off during the talks (!), [and even had an epiphany of sorts on the treadmill when I realised I just had to take longer steps to reach 16 km/h without hyperventilating!] So thanks to my fellow organisers, Neil D. Lawrence, Ted Meeds, Max Welling, and Richard Wilkinson for setting the program of that day! And, by the way, where’s the next “ABC in…”?! (Finland, maybe?)
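To make that geometric-versus-binomial question a bit more concrete, here is a toy Python sketch (toy normal model, invented observed summary, tolerance, prior, and proposal, none of it from the papers discussed). The single-pseudo-sample rule plugs directly into the standard ABC-MCMC acceptance step, while the “simulate until a hit” count returned by tries_until_hit would require properly corrected kernels, as in the work of Lee and Łatuszyński, rather than this naive chain:

```python
import numpy as np

rng = np.random.default_rng(1)
y_obs, eps = 1.2, 0.5                      # made-up observed summary and ABC tolerance

def simulate(theta):
    # toy model: one pseudo-summary drawn from N(theta, 1)
    return theta + rng.normal()

def hit(theta):
    # Bernoulli version: a single pseudo-sample, kept iff it lands within eps of y_obs
    return abs(simulate(theta) - y_obs) < eps

def tries_until_hit(theta, max_tries=10_000):
    # geometric version: simulate pseudo-samples until one lands within eps,
    # returning the (random) number of simulations that were needed
    for k in range(1, max_tries + 1):
        if abs(simulate(theta) - y_obs) < eps:
            return k
    return max_tries

# plain ABC-MCMC with the single pseudo-sample rule: flat prior on (-5, 5) and a
# symmetric random-walk proposal, so the Metropolis ratio reduces to the hit indicator
theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + 0.5 * rng.normal()
    if -5.0 < prop < 5.0 and hit(prop):
        theta = prop
    chain.append(theta)
```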
reflections on the probability space induced by moment conditions with implications for Bayesian Inference [slides]
Here are the slides of my upcoming discussion of Ron Gallant’s paper, tomorrow.
Great poster sessions last night and at lunch today. I saw an ABC poster (by Dennis Prangle, following our random forest paper) and several MCMC posters (by Marco Banterle, who actually won one of the speed-meeting mini-project awards!, Michael Betancourt, Anne-Marie Lyne, and Murray Pollock), and then a rather different poster on Mondrian forests, which generalise random forests to sequential data (by Balaji Lakshminarayanan). The talks all offered interesting perspectives on, or glimpses of, big data and some of the unnecessary hype about it (them?!), along with exposing Amazon’s nefarious ambition to become the Earth’s only seller! But I particularly enjoyed the astronomy afternoon and, even more particularly, Steve Roberts’ sweep through machine learning for astronomy. Steve characterised variational Bayes as picking your choice of sufficient statistics, which made me wonder why there are no stronger connections between variational Bayes and ABC. He also quoted the book The Fourth Paradigm: Data-Intensive Scientific Discovery by Tony Hey as putting forward interesting notions. (A book review for the next vacations?!) And he mentioned zooniverse, a citizen-science website I was not aware of, with a Bayesian analysis of the learning curve of those annotating citizens (in the case of supernova classification). Big deal, indeed!!!