Archive for MLSS

MLSS 2016: machine learning summer school in Cádiz [deadline]

Posted in Kids, pictures, Running, Statistics, Travel, University life on March 11, 2016 by xi'an

Following [time-wise] the AISTATS 2016 meeting, a machine learning school is organised in Cádiz (as is the tradition for AISTATS meetings in Europe, i.e., in even years). With an impressive [if downright scary] poster! There is no strong statistics component in the programme, apart from a course by Tamara Broderick on non-parametric Bayes, but the list of speakers is impressive and the ten-day school is worth recommending to all interested students. (I remember giving a short course at MLSS 2004 on Berder Island in Brittany, with the immediate reward of running the Auray-Vannes half-marathon that year…) The deadline for applications is March 25, 2016.

the ABC-SubSim algorithm

Posted in pictures, Statistics on April 29, 2014 by xi'an

In a nice coincidence with my ABC tutorial at AISTATS 2014 – MLSS, Manuel Chiachio, James Beck, Juan Chiachio, and Guillermo Rus arXived today a paper on a new ABC algorithm, called ABC-SubSim. The SubSim stands for subset simulation and corresponds to an approach developed by one of the authors for rare-event simulation. This approach looks somewhat similar to the cross-entropy method of Rubinstein and Kroese, in that successive tail sets are created towards reaching a very low probability tail set. Simulating from the current subset increases the probability of reaching the following and less probable tail set. The extension to the ABC setting is done by looking at the acceptance region (in the augmented space) as a tail set and by defining a sequence of tolerances. The paper could also be connected with nested sampling in that constrained simulation through MCMC occurs there as well. Following the earlier paper, the MCMC implementation therein is a random-walk-within-Gibbs algorithm. This is somewhat the central point, in that the sample from the previous tolerance level is used to start a Markov chain aiming at the next tolerance level. (Del Moral, Doucet and Jasra use instead a particle filter, which could easily be adapted to the modified Metropolis move considered in the paper.) The core difficulty with this approach, not covered in the paper, is that the MCMC chains used to produce samples from the constrained sets have to be stopped at some point, esp. since the authors run those chains in parallel. The stopping rule is not provided (see, e.g., Algorithm 3) but its impact on the resulting estimate of the tail probability could be far from negligible… Esp. because there is no burn-in/warm-up. (I cannot see how “ABC-SubSim exhibits the benefits of perfect sampling” as claimed by the authors, p. 6!) The authors re-examined the MA(2) toy benchmark we had used in our earlier survey, reproducing as well the graphical representation on the simplex.
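To make the subset-simulation mechanism more concrete, here is a minimal R sketch of the idea on a toy Gaussian mean problem: each tolerance is set as an empirical quantile of the current distances, and short random-walk chains are restarted in parallel from the surviving particles. The toy model, the quantile level p0, the proposal scale, and the fixed chain length are all illustrative choices of mine, not the authors' settings.

## minimal sketch of the subset-simulation idea in the ABC setting
## (toy Gaussian mean model with a uniform prior; not the paper's code)
set.seed(1)
yobs <- rnorm(20, mean = 2)                       # observed sample
discrep <- function(th)                           # distance between summaries
  abs(mean(rnorm(20, mean = th)) - mean(yobs))

N  <- 1000                                        # particles per level
p0 <- 0.1                                         # fraction kept, sets next tolerance
theta <- runif(N, -5, 5)                          # draws from the uniform prior
rho   <- sapply(theta, discrep)

for (level in 1:5) {
  eps <- quantile(rho, p0)                        # next, smaller tolerance
  seeds <- sample(which(rho <= eps), N, replace = TRUE)
  theta <- theta[seeds]                           # restart chains from survivors
  rho   <- rho[seeds]
  for (t in 1:10) {                               # fixed chain length: the very
    prop  <- theta + rnorm(N, sd = 0.5)           # stopping rule at issue above
    rprop <- sapply(prop, discrep)
    ## uniform prior + symmetric proposal: accept iff in support & within tolerance
    acc <- (abs(prop) <= 5) & (rprop <= eps)
    theta[acc] <- prop[acc]
    rho[acc]   <- rprop[acc]
  }
}
hist(theta)                                       # ABC sample at the final tolerance

After the last level, theta holds an approximate posterior sample at the final eps; note that the hard-coded ten inner iterations stand exactly for the unprovided stopping rule whose impact on the tail-probability estimate is questioned above.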

AISTATS 2014 / MLSS tutorial

Posted in Mountains, R, Statistics, University life on April 26, 2014 by xi'an

Here are the slides of the tutorial on ABC methods I gave yesterday at both AISTATS 2014 and MLSS. (I actually gave a tutorial at another MLSS a few years ago, on the pretty island of Berder in Brittany, next to Vannes.) They are definitely similar to previous talks and tutorials I delivered on this topic of ABC algorithms, with only the last part being original (if unpublished yet). And even then: as Michael Gutmann from the University of Helsinki pointed out to me at the end of my talk, there are similarities between the classification method he presented at MCMSki 4 in Chamonix and our use of random forests. Before my talk, I attended the tutorial of Roderick Murray-Smith from the University of Glasgow, on Machine Learning and Human-Computer Interaction, which was just stunning in its breadth, range of applications, and mastery of multimedia tools. Making me feel like a perfectly inadequate follower…