Archive for software

BayesComp 20: call for contributed sessions!

Posted in pictures, Statistics, Travel, University life on March 20, 2019 by xi'an

Just to remind readers of the upcoming deadline for BayesComp sessions:

The deadline for providing a title and brief abstract for the session is April 1, 2019. Please provide the names and affiliations of the organizer and the three speakers (the organizer can be one of them). Each session lasts 90 minutes and each talk should be 30 minutes long, including Q&A. Contributed sessions can also consist of tutorials on the use of novel software. Decisions will be made by April 15, 2019. Please send your proposals to Christian Robert, co-chair of the scientific committee. We look forward to seeing you at BayesComp 20!

In case you do not feel like organising a whole session by yourself, contact the ISBA section you feel closest to and suggest building the session together!

call for sessions and labs at Bay2sC0mp²⁰

Posted in pictures, R, Statistics, Travel, University life on February 22, 2019 by xi'an

A call to all potential participants to the upcoming BayesComp 2020 conference at the University of Florida in Gainesville, Florida, 7-10 January 2020, to submit proposals [to me] for contributed sessions on everything computational, or training labs [to David Rossell] on a specific language or software. The deadline is April 1 and the sessions will be selected by the scientific committee, with other proposals being offered the possibility to present the associated research during a poster session [which is always a lively component of the conference]. (Conversely, we reserve the possibility of a “last call” session made from particularly exciting posters on new topics.) Plenary speakers for this conference are

and the first invited sessions are already posted on the webpage of the conference. We dearly hope to attract a wide range of research interests into as diverse a program as possible, so please accept this invitation!!!

Elves to the ABC rescue!

Posted in Books, Kids, Statistics on November 7, 2018 by xi'an

Marko Järvenpää, Michael Gutmann, Arijus Pleska, Aki Vehtari, and Pekka Marttinen have written a paper on Efficient Acquisition Rules for Model-Based Approximate Bayesian Computation, soon to appear in Bayesian Analysis, that gives me the right nudge to mention the ELFI software they have been contributing to for a while. Where the acronym stands for Engine for Likelihood-Free Inference. Written in Python, DAG-based, and covering methods like the

  • ABC rejection sampler
  • Sequential Monte Carlo ABC sampler
  • Bayesian Optimization for Likelihood-Free Inference (BOLFI) framework
  • Bayesian Optimization (not likelihood-free)
  • No-U-Turn-Sampler (not likelihood-free)

[Warning: I did not experiment with the software! Feel free to share.]
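To make the first item in the list above concrete, here is a minimal, generic ABC rejection sampler in Python. This is not ELFI code; the function name, the toy Normal-mean example, and the choice of a Euclidean distance on a single summary statistic are purely illustrative.

```python
import numpy as np

def abc_rejection(y_obs, prior_sample, simulate, summary, eps, n_draws=10_000):
    """Toy ABC rejection sampler: keep parameter draws whose simulated
    summary lands within eps (Euclidean distance) of the observed one."""
    s_obs = summary(y_obs)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()    # draw from the prior
        y_sim = simulate(theta)   # simulate pseudo-data given theta
        if np.linalg.norm(summary(y_sim) - s_obs) <= eps:
            accepted.append(theta)
    return np.array(accepted)

# hypothetical example: inferring a Normal mean under a diffuse Normal prior
rng = np.random.default_rng(0)
y_obs = rng.normal(1.0, 1.0, size=50)
post = abc_rejection(
    y_obs,
    prior_sample=lambda: rng.normal(0.0, 10.0),
    simulate=lambda th: rng.normal(th, 1.0, size=50),
    summary=lambda y: np.array([y.mean()]),
    eps=0.1,
)
```

The other samplers in the list refine this blueprint, e.g. SMC ABC by shrinking eps over a sequence of weighted populations, and BOLFI by replacing brute-force simulation with a surrogate model of the discrepancy.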

“…little work has focused on trying to quantify the amount of uncertainty in the estimator of the ABC posterior density under the chosen modelling assumptions. This uncertainty is due to a finite computational budget to perform the inference and could be thus also called as computational uncertainty.”

The paper is about looking at the “real” ABC distribution, that is, the one resulting from the realistic setting of a finite number of simulations and acceptances. By acquisition, the authors mean an efficient way to propose the next value of the parameter θ, towards minimising the uncertainty in the ABC density estimate. Note that this involves a loss function that must be chosen by the analyst and then be available to the minimisation program. If this sounds complicated…

“…our interest is to design the evaluations to minimise the uncertainty in a quantity that itself describes the uncertainty of the parameters of a costly simulation model.”

it indeed is and it requires modelling choices. As in Gutmann and Corander (2016), which was also concerned with designing the location of the learning parameters, the modelling is based here on a Gaussian process for the discrepancy between the observed and the simulated data. Which provides an estimate of the likelihood, later used for selecting the next sampling value of θ. The final ABC sample is however produced by a GP estimation of the ABC distribution. As noted by the authors, the method may prove quite time consuming: for instance, one involved model required one minute of computation time for selecting the next evaluation location. (I had a bit of difficulty when reading the paper as I kept hitting notions that are local to the paper but not immediately or precisely defined, such as “adequation function” [p.11] or “discrepancy”. Maybe correlated with short nights while staying at CIRM for the Masterclass, always waking up around 4am for unknown reasons!)
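For intuition only, here is a hypothetical sketch of such a GP-based selection loop in Python, using scikit-learn's GaussianProcessRegressor. It is not the authors' code: the acquisition shown is a plain lower-confidence-bound on the discrepancy, whereas the paper derives rules that directly target the uncertainty of the ABC posterior estimate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def next_evaluation(thetas, discrepancies, candidates, kappa=2.0):
    """Fit a GP to the (theta, discrepancy) pairs simulated so far, then pick
    the candidate theta minimising a lower confidence bound mu - kappa*sd.
    A stand-in for the uncertainty-reducing acquisition rules of the paper."""
    gp = GaussianProcessRegressor(normalize_y=True)
    gp.fit(np.asarray(thetas).reshape(-1, 1), np.asarray(discrepancies))
    mu, sd = gp.predict(np.asarray(candidates).reshape(-1, 1), return_std=True)
    return candidates[np.argmin(mu - kappa * sd)]

# hypothetical usage with a scalar parameter and made-up discrepancies
cand = np.linspace(-5, 5, 200)
theta_next = next_evaluation([0.0, 1.0, 2.5], [3.2, 1.1, 0.7], cand)
```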

a jump back in time

Posted in Books, Kids, Statistics, Travel, University life on October 1, 2018 by xi'an

As the Department of Statistics in Warwick is slowly emptying its shelves and offices for the big migration to the new building that is almost completed, books and documents are abandoned in the corridors and the work spaces. On this occasion, I thus happened to spot a vintage edition of the Valencia 3 proceedings. I had missed this meeting and hence the volume for, during the last year of my PhD, I was drafted in the French Navy and as a result prohibited from travelling abroad. (Although on reflection I could have safely done it with no one in the military the wiser!) Reading through the papers thirty years later is a weird experience, as I do not remember most of the papers, the exception being the mixture modelling paper by José Bernardo and Javier Girón which I studied a few years later when writing the mixture estimation and simulation paper with Jean Diebolt. And then again in our much more recent non-informative paper with Clara Grazian. And Prem Goel's survey of Bayesian software. That is, 1987 state-of-the-art software. Covering an amazing list of eighteen programs, including versions by Zellner, Tierney, Schervish, Smith [but no MCMC], Jaynes, Goldstein, Geweke, van Dijk, and Bauwens, which apparently did not survive the ages till now. Most were in Fortran but S was also mentioned. And another version of the Tierney, Kass and Kadane paper on Laplace approximations. And the reference paper of Dennis Lindley [who was already retired from UCL at that time!] on the Hardy-Weinberg equilibrium. And another paper by Don Rubin on using SIR (Rubin, 1983) for simulating from posterior distributions with missing data. Ten years before the particle filter paper, and apparently missing the possibility of weights with infinite variance.
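As a reminder of what SIR does, here is my own toy sketch in Python (illustrative only, not Rubin's algorithm verbatim): draw from a proposal, weight each draw by the target-to-proposal ratio, then resample, with the caveat flagged above that a poor proposal can make the importance weights have infinite variance.

```python
import numpy as np

def sir(log_target, proposal_sample, log_proposal,
        n_prop=10_000, n_keep=1_000, seed=0):
    """Sampling importance resampling: simulate from a proposal, weight each
    draw by target/proposal, then resample according to those weights."""
    rng = np.random.default_rng(seed)
    draws = proposal_sample(n_prop)
    logw = log_target(draws) - log_proposal(draws)
    w = np.exp(logw - logw.max())   # stabilise before normalising
    w /= w.sum()
    return rng.choice(draws, size=n_keep, replace=True, p=w)
```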

There already were some illustrations of Bayesian analysis in action, including one by Jay Kadane reproduced in his book. And several papers by Jim Berger, Tony O'Hagan, Luis Pericchi and others on imprecise Bayesian modelling, which was in tune with the era, the imprecise probability book by Peter Walley being about to appear. And a paper by Shaw on numerical integration that mentioned quasi-random methods, applied to a 12-component Normal mixture. Overall, much less theoretical content than I would have expected. And nothing about shrinkage estimators, although a fraction of the speakers had worked on this topic most recently.

At a less fundamental level, this was a time when LaTeX was becoming a standard, as shown by a few papers in the volume (and as I was to find when visiting Purdue the year after), even though most were still typed on a typewriter, including a manuscript addition by Dennis Lindley. And Warwick appeared as a Bayesian hotspot, with at least five papers written by people there permanently or on a long-term visit. (In case a local is interested in it, I have kept the volume, to be found in my new office!)

Bayesian workers, unite!

Posted in Books, Kids, pictures, Statistics, University life on January 19, 2018 by xi'an

This afternoon, Alexander Ly is defending his PhD thesis at the University of Amsterdam. While I cannot attend, I want to celebrate the event and a remarkable thesis built around the Bayes factor [even though we disagree on its role!] and the Jeffreys's Amazing Statistics Program (!), otherwise known as JASP. Plus commend the coolest thesis cover I ever saw, made by the JASP graphical designer Viktor Beekman and representing Harold Jeffreys leading empirical science workers in the best tradition of socialist realism! Alexander wrote a post on the JASP blog to describe the thesis, the cover, and his plans for the future. Congratulations!

To predict and serve?

Posted in Books, pictures, Statistics on October 25, 2016 by xi'an

Kristian Lum and William Isaac published a paper in Significance last week [with the above title] about predictive policing systems used in the USA and presumably in other countries to predict future crimes [and therefore prevent them]. This sounds like a good idea for a science fiction plot, à la Philip K Dick [in his short story, The Minority Report], but that it is used in real life definitely sounds frightening, especially when the civil rights of the targeted individuals are impacted. (Although some politicians in different democratic countries show increasing contempt for keeping everyone's rights equal…) I also feel terrified by the social determinism behind the very concept of predicting crime from socio-economic data (and possibly genetic characteristics in the near future, bringing us back to the dark days of physiognomy!)

“…crimes that occur in locations frequented by police are more likely to appear in the database simply because that is where the police are patrolling.”

Kristian and William examine in this paper one statistical aspect of police forces relying on crime prediction software, namely the bias in the data exploited by the software and in the resulting policing. (While the accountability of the police actions when induced by such software is not explored, this is obviously related to the Nature editorial of last week, “Algorithm and blues“, which [in short] calls for watchdogs on AIs and decision algorithms.) When the data is gathered from police and justice records, any bias in checks, arrests, and convictions will be reproduced in the data and hence will repeat the bias in targeting potential criminals. As aptly put by the authors, the resulting machine learning algorithm will be “predicting future policing, not future crime.” Worse, by having no reservation about over-fitting [the more predicted crimes the better], it will increase the bias in the same direction. In the Oakland drug-user example analysed in the article, the police concentrate almost exclusively on a few grid squares of the city, resulting in the above self-predicting fallacy. However, I do not see much hope in using other surveys and datasets towards eliminating this bias, as they also carry their own shortcomings. Even without biases, predicting crimes at the individual level just seems a bad idea, for statistical and ethical reasons.
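To make the self-predicting fallacy concrete, here is a toy simulation in Python with entirely hypothetical numbers: two areas with identical true crime rates, one of which starts out more heavily patrolled, so that the recorded counts, and any allocation trained on them, keep pulling patrols back to the same place.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([5.0, 5.0])      # identical true daily crime rates in areas A and B
patrol_share = np.array([0.8, 0.2])   # but area A starts out far more patrolled

recorded = np.zeros(2)
for day in range(365):
    crimes = rng.poisson(true_rate)
    # a crime only enters the database if police happen to be there to record it
    recorded += rng.binomial(crimes, patrol_share)
    # "predictive" allocation: patrol where the records say the crime is
    if recorded.sum() > 0:
        patrol_share = recorded / recorded.sum()

print(recorded)  # area A ends up with most of the recorded crime despite equal true rates
```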

Journal of Open Source Software

Posted in Books, R, Statistics, University life on October 4, 2016 by xi'an

A week ago, I received a request to referee a paper for the Journal of Open Source Software, which I had never seen (or heard of) before. The concept is quite interesting, with a scope much broader than statistical computing (as I do not know anyone on the board and no one there seems affiliated with a Statistics department). Papers are very terse, describing the associated code in one page or two, and the purpose of refereeing is to check the code. (I was asked to evaluate an MCMC R package but declined for lack of time.) Which is a pretty light task if the code is friendly enough to operate right away and provide demos. Best of luck to this endeavour!