Archive for machine learning

ABC model choice by random forests

Posted in pictures, R, Statistics, Travel, University life on June 25, 2014 by xi'an

After more than a year of collaboration, meetings, simulations, delays, switches, visits, more delays, more simulations, discussions, and a final marathon wrapping day last Friday, Jean-Michel Marin, Pierre Pudlo, and I at last completed our latest collaboration on ABC, with the central arguments that (a) using random forests is a good tool for choosing the most appropriate model and (b) evaluating the posterior misclassification error rather than the posterior probability of a model is an appropriate paradigm shift. The paper has been co-signed with our population genetics colleagues, Jean-Marie Cornuet and Arnaud Estoup, as they provided helpful advice on the tools and on the genetic illustrations and as they plan to include those new tools in their future analyses and DIYABC software. ABC model choice via random forests is now arXived and very soon to be submitted…

One scientific reason for this fairly long conception is that it took us several iterations to understand the intrinsic nature of the random forest tool and how it could be most naturally embedded in ABC schemes. We first imagined it as a filter from a set of summary statistics to a subset of significant statistics (hence the automated ABC advertised in some of my past or future talks!), with the additional appeal of an associated distance induced by the forest. However, we later realised that (a) further ABC steps were counterproductive once the model was selected by the random forest, (b) including more summary statistics was always beneficial to the performance of the forest, and (c) the connections between (i) the true posterior probability of a model, (ii) the ABC version of this probability, and (iii) the random forest version of the above were at best very loose. The above picture is taken from the paper: it shows how the true and the ABC probabilities (do not) relate in the example of an MA(q) model… We thus had another round of discussions and experiments before deciding the unthinkable, namely to give up the attempts to approximate the posterior probability in this setting and to come up with another assessment of the uncertainty associated with the decision. This led us to propose computing a posterior predictive error as the error assessment for ABC model choice. This is mostly a classification error, but (a) it is based on the ABC posterior distribution rather than on the prior and (b) it does not require extra computations when compared with other empirical measures such as cross-validation, while avoiding the sin of using the data twice!
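To make the scheme concrete, here is a minimal R sketch of the core idea (my toy illustration, not the paper's DIYABC implementation): simulate a reference table of summary statistics from each model's prior predictive, train a random forest to classify the model index, and feed it the observed summaries. The MA(1)/MA(2) pair, the flat priors, the autocorrelation summaries, and all tuning constants below are illustrative assumptions.

```r
## Toy sketch of ABC model choice via random forests (illustrative only):
## choose between an MA(1) and an MA(2) model for a time series.
library(randomForest)
set.seed(1)

n <- 100    # length of each simulated series
N <- 2000   # simulations per model, forming the ABC reference table

## summary statistics: the first three sample autocorrelations
summaries <- function(x) acf(x, lag.max = 3, plot = FALSE)$acf[2:4]

## one draw from the prior predictive of an MA(q) model
simulate_ma <- function(q) {
  theta <- runif(q, -1, 1)   # crude flat prior on the MA coefficients
  summaries(arima.sim(list(ma = theta), n = n))
}

ref <- rbind(t(replicate(N, simulate_ma(1))), t(replicate(N, simulate_ma(2))))
colnames(ref) <- paste0("acf", 1:3)
model <- factor(rep(c("MA(1)", "MA(2)"), each = N))

## the forest is the model-choice classifier; its out-of-bag confusion
## matrix already provides a prior estimate of the misclassification error
rf <- randomForest(ref, model, ntree = 500)
rf$confusion

## classify a synthetic "observed" series, here generated from an MA(2)
obs <- matrix(summaries(arima.sim(list(ma = c(0.6, 0.2)), n = n)),
              nrow = 1, dimnames = list(NULL, colnames(ref)))
predict(rf, obs)
```

Note that extra columns added to `ref` can only be exploited or ignored by the forest, which is one way to read point (b) above.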

last Big MC [seminar] before summer [June 19, 3pm]

Posted in pictures, Statistics, University life on June 17, 2014 by xi'an

Last session of our Big’MC seminar at Institut Henri Poincaré this year, on Thursday, June 19, with

Chris Holmes (Oxford) at 3pm on

Robust statistical decisions via re-weighted Monte Carlo samples

and Pierre Pudlo (I3M, Université de Montpellier 2) at 4:15pm on [our joint work]

ABC and machine learning

bridging the gap between machine learning and statistics

Posted in pictures, Statistics, Travel, University life on May 10, 2014 by xi'an

Today in Warwick, I had a very nice discussion with Michael Betancourt on many statistical and computational issues, but at one point in the conversation we came upon the trouble of bridging the gap between the machine learning and statistics communities. While a conference like AISTATS certainly contributes to this, it does not reach the main bulk of the statistics community. In Reykjavik, we had discussed the corresponding difficulty of publishing a longer and “more” statistical paper in a “more” statistical journal once the central idea had appeared in machine learning conference proceedings like NIPS or AISTATS. We thus had this idea that creating a special fast track in a mainstream statistics journal for a subset of those papers, vetted for instance by a tailor-made committee in the original conference, or creating an annual survey of the top machine learning conference proceedings rewritten in a “more” statistical way (and once again selected by an ad hoc committee), would help, at not too high a cost, to induce machine learners to make the extra effort of switching to another style. From there, we enlarged the suggestion to enlisting a sufficient number of (diverse) bloggers in each major conference towards producing quick but sufficiently informative entries on their epiphany talks (if any), possibly supported by the conference organisers or the sponsoring societies. (I am always happy to welcome any guest blogger at conferences I attend!)

faculty positions in statistics at ENSAE, Paris

Posted in Statistics, University life on April 28, 2014 by xi'an

Here is a call from ENSAE about two positions in statistics/machine learning, starting next semester:

ENSAE ParisTech and CREST are currently inviting applications for one position at the level of associate or full professor from outstanding candidates having demonstrated abilities in both research and teaching. We are interested in candidates with a Ph.D. in Statistics or Machine Learning (or a related field) whose research interests are in high-dimensional statistical inference, learning theory, or statistics of networks.

The appointment could begin as soon as September 1, 2014. The position is for an initial three-year term, with a possible renewal contingent on a positive evaluation of research and teaching activities. Salary for suitably qualified applicants is competitive and commensurate with experience. The deadline for application is May 19, 2014. Full details are given here for the first position and there for the second position.

AISTATS 2014 [day #3]

Posted in Mountains, pictures, Statistics, Travel, University life on April 28, 2014 by xi'an

The third day at AISTATS 2014 started with Michael Jordan giving his plenary lecture, or rather three short talks on “Big Data” privacy, communication risk, and (bag of) bootstrap. I had not previously heard Michael talking about the first two topics and found interesting the attempt to put computation into the picture (a favourite notion of Michael’s); however, I was a bit surprised at the choice of a minimax criterion. Indeed, getting away from the minimax criterion was one of the major reasons I moved to the B side of the Force: it puts exactly the same importance on every single value of the parameter, even the most impossible ones. I was also a wee bit surprised at the optimal solution produced by this criterion: in a multivariate binary data setting (e.g., multiple drugs usage), the optimal privacy solution was to create a random binary vector and pick at random between this vector and its complement, depending on which one is closest to the observable. The loss of information seems formidable if the dimension of the vector is large. (Implementing ABC as a privacy [privacizing?] strategy would sound better, if less optimal…) The next session was about deep learning, of which I knew [and know] nothing, but the talk by Yoshua Bengio raised very relevant questions, like how to learn where the main part of the mass of a probability distribution is, besides pointing at a recent survey of his. The survey points at some notions that I master and some that I don’t, but a cursory reading does not lead me to put an intuitive meaning on deep learning.
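For concreteness, here is a rough R sketch of that privacy mechanism as I understood it from the talk (my reconstruction, not the speakers' code; the release probability `p` is an illustrative placeholder):

```r
## Rough reconstruction of the randomised privacy mechanism (my paraphrase):
## draw a uniform random binary vector, determine which of it or its
## complement is closer in Hamming distance to the true record, and
## release that one with probability p (its complement otherwise).
privatize <- function(x, p = 0.75) {   # p is an assumed placeholder
  z <- rbinom(length(x), 1, 0.5)       # uniform random binary vector
  ## distance to the complement 1-z is sum(z == x) for 0/1 entries
  closer <- if (sum(z != x) <= sum(z == x)) z else 1 - z
  if (runif(1) < p) closer else 1 - closer
}

x <- c(1, 0, 0, 1, 1)   # e.g., indicators of usage of five drugs
privatize(x)
```

The sketch also makes the information-loss worry concrete: for a long record, a uniform draw sits near Hamming distance d/2 from it with high probability, so even the released “closer” vector retains little of the original.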

The last session of the day and of the conference was on more statistical issues, like a Gaussian process modelling of a spatio-temporal dataset on Afghanistan attacks by Guido Sanguinetti, the use of Rao-Blackwellisation and control variates to build black-box variational inference by Rajesh Ranganath, the construction of conditional exponential families on mixed graphs by Pradeep Ravikumar, and a presentation of probabilistic programming with Anglican by Frank Wood that I had already seen in Banff. In particular, I found the result on the existence of joint exponential families on graphs when defined by those full conditionals quite exciting!

The second poster session was in the early evening, with many more posters (and plenty of food and drinks!), as it also included the (non-refereed) MLSS posters. Among the many interesting ones I spotted were a hit-and-run sampler for quasi-concave densities, estimating mixtures with negative weights, a failing particle algorithm for a flu epidemic, an exact EP algorithm, and a fairly intense discussion around Richard Wilkinson’s poster on a Gaussian process ABC algorithm (that I discussed on the ‘Og a while ago).
