Archive for proceedings

Florid’AISTATS

Posted in pictures, R, Statistics, Travel, University life on August 31, 2016 by xi'an

The next AISTATS conference is taking place in Fort Lauderdale, Florida, on April 20-22. (The website keeps the same address from one conference to the next, which means all my links to the AISTATS 2016 conference in Cádiz are no longer valid. And that the above sunset from Florida is named… cadiz.jpg!) The deadline for paper submission is October 13 and there are two novel features:

  1. Fast-track for Electronic Journal of Statistics: Authors of a small number of accepted papers will be invited to submit an extended version for fast-track publication in a special issue of the Electronic Journal of Statistics (EJS) after the AISTATS decisions are out. Details on how to prepare such extended journal paper submission will be announced after the AISTATS decisions.
  2. Review-sharing with NIPS: Papers previously submitted to NIPS 2016 are required to declare their previous NIPS paper ID, and optionally supply a one-page letter of revision (similar to a revision letter to journal editors; anonymized) in supplemental materials. AISTATS reviewers will have access to the previous anonymous NIPS reviews. Other than this, all submissions will be treated equally.

I find both initiatives worth applauding and replicating in other machine-learning conferences, particularly with regard to the recent debate we had at the Annals of Statistics.

what to do with refereed conference proceedings?

Posted in Books, Statistics, University life on August 8, 2016 by xi'an

In recent days, we have had a lively discussion among the AEs of the Annals of Statistics as to whether or not to set up a policy regarding publication of papers that have already appeared in a shortened (8-page) version in a machine-learning conference like NIPS. Or AISTATS. While I obviously cannot disclose details here, the debate is quite interesting and may bring the machine-learning and statistics communities closer if resolved in a certain way. My own personal opinion on the matter is that what matters most is what is best for the Annals of Statistics, rather than the authors' tenure cases or the different standards of the machine-learning community. If the submitted paper is based on a brilliant and novel idea that can appeal to a sufficiently wide part of the readership, and if the mathematical support for that idea is strong enough, we should publish the paper. Whether or not an eight-page preliminary version has previously appeared in a conference proceeding like NIPS does not seem particularly relevant to me, as I find those short papers mostly unreadable and hence do not read them. Since the Annals of Statistics runs anti-plagiarism software that is most likely effective, blatant cases of duplication could be avoided. Of course, this does not solve all issues, and papers with similar contents can and will end up being published. However, this is already the case within statistics journals, in the sense that brilliant ideas sometimes end up being split between two or three major journals.

AISTATS 2016 [programme]

Posted in Books, Kids, pictures, Statistics, Travel, University life on March 14, 2016 by xi'an

The full programme for AISTATS 2016 in Cádiz is now online, including the posters (except for the additional posters by MLSS participants). Richard Samworth is scheduled to talk on Monday morning, May 9, Kamalika Chaudhuri on Tuesday morning, May 10, and Adam Tauman Kalai on Wednesday morning, May 11. As at the previous AISTATS meeting, poster sessions are central to the day, while evenings are free (which shows this is not a Bayesian meeting!!!). See you in Cádiz, hopefully! (Registration is still open, just in case.)

AISTATS 2016 [post-decisions]

Posted in Books, pictures, Statistics, Travel, University life on December 27, 2015 by xi'an

Now that the (extended) deadline for AISTATS 2016 decisions has passed, I can gladly report that, out of 594 submissions, we accepted 165 papers, including 35 oral presentations. As reported in the previous blog post, I remain amazed at the gruesome efficiency of the processing machinery and at the overwhelmingly intense involvement of the various actors who handled those submissions. And at the help brought by the Toronto Paper Matching System, developed by Laurent Charlin and Richard Zemel. I clearly was not as active and responsive as many of those actors, and definitely not [by far!] as much as my co-program-chair, Arthur Gretton, who deserves all the praise for achieving a final decision by the end of the year. We have already received a few complaints from rejected authors, but this is to be expected with a rejection rate of 72%. (More annoying were the emails asking for our decisions in the very final days…) An amazing and humbling experience for me, truly! See you in Cádiz, hopefully.

AISTATS 2016 [post-submissions]

Posted in Books, pictures, Statistics, Travel, University life on October 22, 2015 by xi'an

Now that the deadline for AISTATS 2016 submissions is past, I can gladly report that we received the amazing number of 559 submissions, far more than were submitted to previous AISTATS conferences. So much so that it made us fear for a little while [but not any longer!] that the conference room would not be large enough, and hope that we would have to set up video links in the hotel bar!

This also means handling about the same number of papers as a year of JRSS Series B submissions within a single month, given the way submissions are processed for the AISTATS 2016 conference proceedings. The process is indeed [as in other machine-learning conferences] to allocate a batch of papers to each associate editor [or meta-reviewer, or area chair], who then assigns those papers to reviewers, all within a few days, as reviews have to be returned to authors within a month, by November 16 to be precise. This sounds like a daunting task, but it proceeded rather smoothly, thanks to a high degree of automation in processing those papers (this is machine learning, after all!): (a) the immediate response of the large majority of the AEs and reviewers involved, who bid on the papers of most interest to them, and (b) a computer program called the Toronto Paper Matching System, developed by Laurent Charlin and Richard Zemel, which tremendously helps with managing just about everything! Even accounting for the more formatted entries in such proceedings (with an 8-page limit) and the call to conference participants to review other papers, I remain amazed at the resulting difference in time scales for handling papers between statistics and machine learning. (There was a short-lived attempt to replicate this type of processing for the Annals of Statistics, if I remember correctly.)
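The bidding-and-allocation step described above can be caricatured as a toy assignment problem. The sketch below is purely illustrative and has nothing to do with TPMS's actual machinery (which learns paper-reviewer affinities from reviewers' past publications); the greedy rule, function name, and capacity parameters are all mine.

```python
import numpy as np

def assign_papers(affinity, reviewer_cap, paper_need):
    """Greedy allocation of papers to reviewers from a bid/affinity
    matrix of shape (n_papers, n_reviewers): each paper receives
    `paper_need` distinct reviewers, and no reviewer is assigned more
    than `reviewer_cap` papers. A crude stand-in for the optimisation
    a real paper-matching system would perform."""
    n_papers, n_rev = affinity.shape
    load = [0] * n_rev     # papers already assigned to each reviewer
    got = [0] * n_papers   # reviewers already assigned to each paper
    pairs = []
    # visit (paper, reviewer) pairs from highest bid/affinity downwards
    for flat in np.argsort(-affinity, axis=None):
        p, r = divmod(int(flat), n_rev)
        if got[p] < paper_need and load[r] < reviewer_cap:
            pairs.append((p, r))
            got[p] += 1
            load[r] += 1
    return pairs
```

With generous reviewer capacity the greedy pass always completes every paper's quota; with tight capacities a proper matching algorithm (or TPMS's optimisation) would be needed instead.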

Bayesian statistics from methods to models and applications

Posted in Books, Kids, pictures, Statistics, Travel, University life, Wines on July 5, 2015 by xi'an

A Springer book published in conjunction with the great BAYSM 2014 conference held in Vienna last year has now appeared. Here is the table of contents:

  • Bayesian Survival Model Based on Moment Characterization by Arbel, Julyan et al.
  • A New Finite Approximation for the NGG Mixture Model: An Application to Density Estimation by Bianchini, Ilaria
  • Distributed Estimation of Mixture Model by Dedecius, Kamil et al.
  • Jeffreys’ Priors for Mixture Estimation by Grazian, Clara and X
  • A Subordinated Stochastic Process Model by Palacios, Ana Paula et al.
  • Bayesian Variable Selection for Generalized Linear Models Using the Power-Conditional-Expected-Posterior Prior by Perrakis, Konstantinos et al.
  • Application of Interweaving in DLMs to an Exchange and Specialization Experiment by Simpson, Matthew
  • On Bayesian Based Adaptive Confidence Sets for Linear Functionals by Szabó, Botond
  • Identifying the Infectious Period Distribution for Stochastic Epidemic Models Using the Posterior Predictive Check by Alharthi, Muteb et al.
  • A New Strategy for Testing Cosmology with Simulations by Killedar, Madhura et al.
  • Formal and Heuristic Model Averaging Methods for Predicting the US Unemployment Rate by Kolly, Jeremy
  • Bayesian Estimation of the Aortic Stiffness based on Non-invasive Computed Tomography Images by Lanzarone, Ettore et al.
  • Bayesian Filtering for Thermal Conductivity Estimation Given Temperature Observations by Martín-Fernández, Laura et al.
  • A Mixture Model for Filtering Firms’ Profit Rates by Scharfenaker, Ellis et al.

Enjoy!

parallelising MCMC algorithms

Posted in Books, Statistics, University life on December 23, 2014 by xi'an

This paper, A general construction for parallelizing Metropolis-Hastings algorithms, written by Ben Calderhead, was first presented at MCMSki last January and has now appeared in PNAS. It is somewhat related to the recycling idea of Tjelmeland (2004, unpublished) and hence to our 1996 Rao-Blackwellisation paper with George, although there is no recycling herein.

At each iteration of Ben’s algorithm, N proposed values are generated conditional on the “current” value of the Markov chain, which actually consists of (N+1) components and from which one component is drawn at random to serve as a seed for the next proposal distribution and for the simulation of the N other values. In short, this is a data-augmentation scheme, with the index I on one side and the N modified components on the other. The neat trick in the proposal [and the reason for the jump in efficiency] is that the stationary distribution of the auxiliary variable can be determined and hence used (N+1) times in updating the vector of (N+1) components. (Note that picking the index at random means computing the transitions from each component to the N others, hence of the order of (N+1)N evaluations when the proposal densities are not symmetric. This is a potential increase in computing cost, even though what usually costs most is the likelihood computation, which is dispatched to the parallel processors.) While (N+1) terms are involved at each step, the genuine Markov chain remains a single chain, and the N other proposed values are not recycled. Even though they could be [for Monte Carlo integration purposes], as shown e.g. in our paper with Pierre Jacob and Murray Smith. Something that took me a few iterations to understand is why Ben rephrases the original Metropolis-Hastings algorithm as a finite state space Markov chain on the set of indices {1,…,N+1} (Proposition 1). Conditionally on the values of the (N+1)-dimensional vector, the stationary distribution of that sub-chain is no longer uniform. Hence, picking (N+1) indices from this stationary distribution helps in selecting the most appropriate proposed values, which explains why the rejection rate decreases.
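As a rough illustration (not the paper's own code), here is a minimal numpy sketch of one such multiple-proposal step, assuming a symmetric random-walk kernel so that the proposal densities cancel in the index-chain transition matrix; the function name and the eigen-decomposition shortcut are my choices, not Calderhead's.

```python
import numpy as np

def parallel_mh_step(x_curr, log_pi, n_prop, sigma, rng):
    """One multiple-proposal step in the spirit of Calderhead (2014):
    draw N proposals around the current seed, build the (N+1)x(N+1)
    transition matrix of the finite-state chain on indices, and sample
    N+1 new indices from its stationary distribution."""
    # pool = current value plus N random-walk proposals around it
    pool = np.concatenate(([x_curr], x_curr + sigma * rng.standard_normal(n_prop)))
    lp = log_pi(pool)
    # off-diagonal entries: min(1, pi_j/pi_i)/N (symmetric kernel cancels)
    A = np.exp(np.minimum(0.0, lp[None, :] - lp[:, None])) / n_prop
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, 1.0 - A.sum(axis=1))  # rows must sum to one
    # stationary distribution of the index sub-chain (Perron eigenvector)
    vals, vecs = np.linalg.eig(A.T)
    w = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    w = np.clip(w / w.sum(), 0.0, None)  # guard against tiny negative noise
    w /= w.sum()
    # draw N+1 indices i.i.d. from the stationary distribution
    idx = rng.choice(n_prop + 1, size=n_prop + 1, p=w)
    return pool[idx]
```

The last returned value can serve as the seed of the next iteration. (For a symmetric kernel, detailed balance actually gives this stationary vector in closed form, proportional to the target density over the pool, so the eigen-decomposition is overkill here; it is kept to mirror the general finite-state-chain construction of Proposition 1.)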

The paper indeed evaluates the impact of increasing the number of proposals in terms of effective sample size (ESS), acceptance rate, and mean squared jump distance, based on two examples. As often in parallel implementations, the paper suggests an “N-fold increase in computational speed”, even though this is simply the effect of running the same algorithm on a single processor versus N parallel processors. If the comparison is between a single-proposal Metropolis-Hastings algorithm on a single processor and an N-fold proposal on N processors, I would say the latter is slower because the selection of the index I forces the computation of all pairs of reverse moves. Nonetheless, since this is an almost free bonus from using N processors, it sounds worth investigating and comparing with more complex parallel schemes such as coupled chains.
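For reference, toy versions of the three criteria mentioned above can be coded in a few lines. These are textbook definitions of my choosing (ESS via an initial-positive-sequence autocorrelation sum, acceptance rate as the fraction of non-zero jumps, mean squared jump distance), not the paper's exact estimators.

```python
import numpy as np

def chain_diagnostics(chain):
    """Crude diagnostics for a 1-d MCMC output: effective sample size,
    empirical acceptance rate, and mean squared jump distance."""
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    c = chain - chain.mean()
    # autocorrelation function, normalised so acf[0] == 1
    acf = np.correlate(c, c, mode="full")[n - 1:] / (c @ c)
    # integrated autocorrelation time: sum lags until first non-positive value
    tau = 1.0
    for r in acf[1:]:
        if r <= 0:
            break
        tau += 2.0 * r
    ess = n / tau
    jumps = np.diff(chain)
    accept_rate = np.mean(jumps != 0.0)  # rejected moves repeat the same value
    msjd = np.mean(jumps ** 2)
    return ess, accept_rate, msjd
```

On a near-independent chain the ESS approaches the chain length, while a sticky chain shows a low acceptance rate and a small mean squared jump distance.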