Archive for Spain

AISTATS 2016 [post-submissions]

Posted in Books, pictures, Statistics, Travel, University life on October 22, 2015 by xi'an

Now that the deadline for AISTATS 2016 submissions is past, I can gladly report that we got the amazing number of 559 submissions, which is much more than what was submitted to the previous AISTATS conferences. To the point that it made us fear for a little while [but not any longer!] that the conference room would not be large enough, and hope that we would have to install video connections in the hotel bar!

Which also means handling about the same number of papers as a year of JRSS Series B submissions within a single month, which is how submissions are processed for the AISTATS 2016 conference proceedings. The process is indeed [as in other machine learning conferences] to allocate each paper to an associate editor [or meta-reviewer or area chair], and then have those AEs allocate their papers to reviewers, all this within a few days, as the reviews have to be returned to authors within a month, by November 16 to be precise. This sounds like a daunting task, but it proceeded rather smoothly thanks to a high degree of automation (this is machine learning, after all!) in processing those papers, due to (a) the immediate response of the large majority of AEs and reviewers involved, who bid on the papers that were of most interest to them, and (b) a computer program called the Toronto Paper Matching System, developed by Laurent Charlin and Richard Zemel, which tremendously helps with managing about everything! Even when accounting for the more formatted entries in such proceedings (with an 8-page limit) and the call to conference participants for reviewing other papers, I remain amazed at the resulting difference in the time scales for handling papers in the fields of statistics and machine learning. (There was a short-lived attempt to replicate this type of processing for the Annals of Statistics, if I remember well.)
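For readers curious about what such a bid-based allocation amounts to, here is a toy sketch of my own, assigning papers to AEs greedily from the strongest bids under a cap on each AE's load. This is purely illustrative and in no way the actual Toronto Paper Matching System (which relies on learned affinity scores rather than raw bids); all names and numbers below are made up.

from collections import defaultdict

def allocate(bids, max_load):
    """bids: dict mapping (ae, paper) -> bid score; max_load: papers per AE."""
    load = defaultdict(int)          # papers already assigned to each AE
    assignment = {}                  # paper -> AE
    # process (AE, paper) pairs from strongest to weakest bid
    for (ae, paper), score in sorted(bids.items(), key=lambda kv: -kv[1]):
        if paper not in assignment and load[ae] < max_load:
            assignment[paper] = ae
            load[ae] += 1
    return assignment

# toy example: 2 AEs bidding on 3 papers, at most 2 papers per AE
bids = {("AE1", "p1"): 0.9, ("AE1", "p2"): 0.4, ("AE1", "p3"): 0.7,
        ("AE2", "p1"): 0.5, ("AE2", "p2"): 0.8, ("AE2", "p3"): 0.6}
print(allocate(bids, max_load=2))   # {'p1': 'AE1', 'p2': 'AE2', 'p3': 'AE1'}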

snapshot from Madrid

Posted in pictures, Statistics, Travel, University life on October 9, 2015 by xi'an

I am in Madrid for the day, discussing with friends here the details of a collaboration on a Spanish Antarctica wildlife project. Which is of course a most exciting prospect!

AISTATS 2016 [call for submissions]

Posted in pictures, Statistics, Travel, University life on August 21, 2015 by xi'an

At the last (European) AISTATS 2014, I agreed to be the program co-chair for AISTATS 2016, along with Arthur Gretton from the Gatsby Unit at UCL. (AISTATS stands for Artificial Intelligence and Statistics.) Thanks to Arthur's efforts and dedication [as the organisation of an AISTATS meeting is far more complex than that of any conference I have organised so far!], the meeting is taking shape. First, it will take place in Cádiz, Andalucía, Spain, on May 9-11, 2016. (A place more related to the conference's palm tree logo than the previous location in Reykjavik, even though I would be the last one to complain it took place in Iceland!)

Second, the call for submissions is now open. The process is similar to other machine learning conferences in that papers are first submitted for the conference proceedings, then undergo a severe and tight reviewing process, with a response period for the authors to address the reviewers' comments, and only the accepted papers can be presented as posters, some of which are selected for an additional oral presentation. The major dates for submitting to AISTATS 2016 are

Proceedings track paper submission deadline: 23:59 UTC, Oct 9, 2015
Proceedings track initial reviews available: Nov 16, 2015
Proceedings track author feedback deadline: Nov 23, 2015
Proceedings track paper decision notifications: Dec 20, 2015

Submission instructions, including the electronic submission site, are available at this address.

I was quite impressed by the quality and intensity of the AISTATS 2014 conference, which is why I so readily accepted being program co-chair, and hence predict an equally rewarding AISTATS 2016, thus encouraging all interested 'Og's readers to consider submitting a paper there! Even though I confess it will make for a rather busy first semester of 2016, between MCMSki V in January, the CIRM Statistics month in February, the CRiSM workshop on Estimating constants in April, AISTATS 2016 thus in May, and ISBA 2016 in June…

carrer de Thomas Bayes [formulas magistrales]

Posted in pictures, Statistics, Travel on June 11, 2015 by xi'an


An objective prior that unifies objective Bayes and information-based inference

Posted in Books, pictures, Statistics, Travel, University life on June 8, 2015 by xi'an

During the Valencia O'Bayes 2015 meeting, Colin LaMont and Paul Wiggins arXived a paper entitled "An objective prior that unifies objective Bayes and information-based inference". It would have been interesting to have the authors in Valencia, as they make bold claims about their w-prior being uniformly and maximally uninformative, plus achieving the unification advertised in the title of the paper, meaning that the free energy (the log transform of the inverse evidence) is the Akaike information criterion.

The paper starts by defining a true prior distribution (presumably in analogy with the true value of the parameter?) and generalised posterior distributions associated with any arbitrary prior. (Some notations are imprecise, check (3) with the wrong denominator, or the predictive that is supposed to cover N new observations on p.2…) It then introduces a discretisation by considering all models within a certain Kullback divergence δ to be indistinguishable. (A definition that does not account for the asymmetry of the Kullback divergence.) From there, it most surprisingly [given the above discretisation] derives a density on the whole parameter space

\pi(\theta) \propto \det I(\theta)^{1/2}\,(N/2\pi\delta)^{K/2}

where N is the number of observations and K the dimension of θ, a dimension which may vary. The dependence of the above on N is a result of using the predictive on N points instead of one. The w-prior is however defined differently: "as the density of indistinguishable models such that the multiplicity is unity for all true models", where the log transform of the multiplicity is the expected log marginal likelihood minus the expected log predictive [all expectations being under the sampling distribution, conditional on θ]. This is rather puzzling in that it involves the "true" value of the parameter [another notational imprecision, since it has to hold for all θ's] as well as possibly improper priors. When the prior is improper, the log-multiplicity is a difference of two terms such that the first term depends on the constant used with the improper prior, while the second one does not… Unless the multiplicity constraint also determines the normalising constant?! But this does not seem to be the case when considering the following section on normalising the w-prior, which mentions a "cutoff" for the integration that seems to pop out of nowhere. Curiouser and curiouser. Due to this unclear handling of infinite mass priors, and since the claimed properties of uniform and maximal uninformativeness are not established in any formal way, nor is the existence of a non-asymptotic solution to the multiplicity equation demonstrated, I quickly lost interest in the paper, which does not contain any worked-out example. Read at your own risk!
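To see what the displayed density amounts to in a concrete case, here is a small numerical sketch of my own (not taken from the paper), using the Bernoulli model, for which K=1 and the Fisher information is I(θ)=1/θ(1−θ): the factor (N/2πδ)^{K/2} does not depend on θ, so pointwise the density is merely a rescaled Jeffreys prior, with N and δ only entering through the total mass.

import numpy as np

def w_density(theta, N, delta, fisher):
    """Unnormalised density det I(theta)^{1/2} (N / 2 pi delta)^{K/2}, with K = 1."""
    return np.sqrt(fisher(theta)) * np.sqrt(N / (2 * np.pi * delta))

# Fisher information of the Bernoulli model
fisher_bernoulli = lambda t: 1.0 / (t * (1.0 - t))

theta = np.linspace(0.01, 0.99, 99)
dens = w_density(theta, N=100, delta=0.1, fisher=fisher_bernoulli)
mass = np.sum(dens) * (theta[1] - theta[0])   # crude Riemann sum: total mass depends on N and delta...
print(round(float(mass), 2))                  # ...while dens/mass is simply the Jeffreys prior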

Valencia snapshot [#2]

Posted in Kids, pictures, Running, Travel on June 5, 2015 by xi'an


Valencia snapshot

Posted in pictures, Statistics, Travel, University life, Wines on June 4, 2015 by xi'an


