Archive for University of Oxford

last Big MC [seminar] before summer [June 19, 3pm]

Posted in pictures, Statistics, University life on June 17, 2014 by xi'an

[crossing Rue Soufflot on my way to IHP from Vieux Campeur, March 28, 2013]

Last session of our Big’MC seminar at Institut Henri Poincaré this year, on Thursday, June 19, with

Chris Holmes (Oxford) at 3pm on

Robust statistical decisions via re-weighted Monte Carlo samples

and Pierre Pudlo (I3M, Université de Montpellier 2) at 4:15pm on [our joint work]

ABC and machine learning

lazy ABC

Posted in Books, Statistics, University life on June 9, 2014 by xi'an

“A more automated approach would be useful for lazy versions of ABC SMC algorithms.”

Dennis Prangle just arXived the work on lazy ABC he had presented in Oxford at the i-like workshop a few weeks ago. The idea behind the paper is to cut down massively on the generation of pseudo-samples that are “too far” from the observed sample. This is formalised through a stopping rule that sets the estimated likelihood to zero with probability 1-α(θ,x) and otherwise divides the original ABC estimate by α(θ,x), which makes the modification unbiased when compared with basic ABC. The efficiency gain appears when α(θ,x) can be computed much faster than producing the entire pseudo-sample and its distance to the observed sample. When considering an approximation to the asymptotic variance of this modification, Dennis derives an optimal (in the sense of the effective sample size), if formal, version of the acceptance probability α(θ,x), conditional on the choice of a “decision statistic” φ(θ,x) and of an importance function g(θ). (I do not get his Remark 1 about the case when π(θ)/g(θ) only depends on φ(θ,x), since the latter also depends on x. Unless one considers a multivariate φ which contains π(θ)/g(θ) itself as a component.) This approach requires estimating

\mathbb{P}(d(S(Y),S(y^o))<\epsilon|\varphi)

as a function of φ: I would have thought (non-parametric) logistic regression a good candidate towards this estimation, but Dennis is rather critical of this solution.
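To fix ideas, here is a minimal R sketch of the lazy ABC reweighting step, where the model, the summary statistic, and the continuation probability alpha() are stand-ins of my own rather than the choices made in the paper:

#one lazy ABC weight: continue the costly simulation with probability
#alpha(theta,x1), else stop early with weight zero; dividing surviving
#ABC indicators by alpha keeps the overall estimator unbiased
lazyABCweight=function(theta,yobs,eps,alpha){
  x1=rnorm(1,theta)                  #cheap partial simulation (stand-in)
  a=alpha(theta,x1)                  #continuation probability in (0,1]
  if (runif(1)>a) return(0)          #early stop, zero weight
  y=rnorm(length(yobs),theta)        #costly full pseudo-sample (stand-in)
  return((abs(mean(y)-mean(yobs))<eps)/a)}

The unbiasedness follows from E[1{U<a}/a]=1 conditional on the partial simulation.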

I added the quote above as I find it somewhat ironic: at this stage, to enjoy laziness, the algorithm first has to go through a massive calibration stage, from the selection of the subsample [to be simulated before computing the acceptance probability α(θ,x)] to the construction of the (somewhat mysterious) decision statistic φ(θ,x) to the estimation of the terms composing the optimal α(θ,x). The most natural choice of φ(θ,x) seems to involve subsampling, still with a wide range of possibilities and ensuing efficiencies. (The choice found in the application is somehow anticlimactic in this respect.) In most ABC applications, I would suggest using a quick & dirty approximation of the distribution of the summary statistic.
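For orientation, and by analogy with optimality results for randomly truncated importance sampling estimators (my reconstruction, to be checked against Dennis’s theorem rather than quoted from it), the optimal continuation probability should take the rough form

\alpha(\varphi) = \min\left\{1, \lambda\,\sqrt{\gamma(\varphi)/\bar{T}(\varphi)}\right\}

where γ(φ) involves the conditional expectation of the squared importance weight times the above hitting probability, T̄(φ) is the expected remaining simulation time given φ, and λ a normalising constant: whence the need to estimate the hitting probability above.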

A slight point of perplexity about this “lazy” proposal, namely the static role of ε, which is impractical because the tolerance is never set in stone… As discussed several times here, the tolerance is a function of many factors, incl. all the calibration parameters of the lazy ABC, rather than an absolute quantity. The paper is rather terse on this issue (see Section 4.2.2). It seems to me that playing with a large collection of tolerances may be too costly in this setting.

¼th i-like workshop in St. Anne’s College, Oxford

Posted in pictures, Statistics, Travel, University life on March 27, 2014 by xi'an

Due to my previous travelling to and from Nottingham for the seminar, and to my heading back home early enough to avoid the dreary evening trains from Roissy airport (no luck there, even at 8pm the RER train was not operating efficiently!, and no fast lane is planned prior to 2023…), I did not see many talks at the i-like workshop. About ¼th of them, roughly… I even missed the poster session (and the most attractive title of Lazy ABC by Dennis Prangle) thanks to another dreary train ride from Derby to Oxford.

As it happened, I had already heard or read parts of the talks in the Friday morning session, but this made understanding them easier. As in Banff, Paul Fearnhead‘s talk on reparameterisations for pMCMC on hidden Markov models opened a wide door to possible experiments on those algorithms. The examples in the talk were mostly of the parameter duplication type, somewhat creating unidentifiability to decrease correlation, but I also wondered at the possibility of introducing frequent replicas of the hidden chain in order to fight degeneracy. Then Sumeet Singh gave a talk on the convergence properties of noisy ABC for approximate MLE. Although I had read some of the papers behind the talk, it made me realise how keeping balls around each observation in the ABC acceptance step was not leading to extinction as the number of observations increased. (Sumeet also had a good line with his ABCDE algorithm, standing for ABC done exactly!) Anthony Lee covered his joint work with Krys Łatuszyński on the ergodicity conditions of the ABC-MCMC algorithm, the only positive case being the 1-hit algorithm discussed in an earlier post. This result will hopefully get more publicity, as I frequently read that increasing the number of pseudo-samples has no clear impact on the ABC approximation. Krys Łatuszyński concluded the morning with an aggregate of the various results he and his co-authors had obtained on the fascinating Bernoulli factory, including constructive derivations.

After a few discussions on and around research topics, it was soon time to take advantage of the grand finale of a March shower to walk from St. Anne’s College to Oxford Station and start the trip back home. I was lucky enough to find a seat and could start experimenting in R with the new idea my trip to Nottingham had raised, all the while discussing a wee bit with my neighbour, a delightful old lady from the New Forest travelling to Coventry, recovering from a brain seizure, wondering about my LaTeX code syntax despite the tiny fonts, who quite suddenly popped a small screen from her bag to start playing Candy Crush!, apologizing all the same. The overall trip was just long enough for my R code to validate this idea of mine, making this week in England quite a profitable one!!!

i-like Oxford [workshop, March 20-21, 2014]

Posted in Statistics, Travel, University life on February 5, 2014 by xi'an

There will be another i-like workshop this spring, over two days at St Anne’s College, Oxford, involving talks by Xiao-Li Meng and Eric Moulines, as well as by researchers from the participating universities. Registration is now open. (I will take part as a part-time participant, travelling from Nottingham where I give a seminar on the 20th.)

OxWaSP (The Oxford-Warwick Statistics Programme)

Posted in Kids, Statistics, University life on January 21, 2014 by xi'an

[University of Warwick, May 31, 2010]

This is an official email promoting OxWaSP, our joint doctoral training programme, which I [objectively] think is definitely worth considering if planning a PhD in Statistics. Anywhere.

The Statistics Department – University of Oxford and the Statistics Department – University of Warwick, supported by the EPSRC, will run a joint Centre of Doctoral Training in the theory, methods and applications of Statistical Science for 21st Century data-intensive environments and large-scale models. This is the first centre of its type in the world and will equip its students to work in an area in growing demand both in academia and industry.

[Oxford, Feb. 23, 2012]

Each year from October 2014 OxWaSP will recruit at least 5 students attached to Warwick and at least 5 attached to Oxford. Each student will be funded with a grant for four years of study. Students spend the first year at Oxford developing advanced skills in statistical science. In the first two terms students are given research training through modular courses: Statistical Inference in Complex Models; Multivariate Stochastic Processes; Bayesian Analyses for Complex Structural Information; Machine Learning and Probabilistic Graphical Models; Stochastic Computation for Intractable Inference. In the third term, students carry out two small research projects. At the end of year 1, students begin a three-year research project with a chosen supervisor, five continuing at Oxford and five moving to the University of Warwick.

Training in years 2-4 includes annual retreats, workshops and a research course in machine learning at Amazon (Berlin). There are funded opportunities for students to work with our leading industrial partners and to travel in their third year to an international summer placement in some of the strongest Statistics groups in the USA, Europe and Asia including UC Berkeley, Columbia University, Duke University, the University of Washington in Seattle, ETH Zurich and NUS Singapore.

Applications will be considered in gathered fields, with the next deadline of 24 January 2014 (non-EU applicants should apply by this date to maximise their chances of funding). Interviews for successful applicants who submit by the January deadline will take place at the end of February 2014. There will be a second deadline for applications at the end of February (Warwick) and on 14 March (Oxford).

generalised Savage-Dickey ratio

Posted in Statistics, University life on November 11, 2013 by xi'an

Today, Ewan Cameron arXived a paper that generalises our Robert and Marin (2010) paper on the measure-theoretic difficulties (or impossibilities) of the Savage-Dickey ratio and on their possible resolutions. (A paper of mine I like very much despite it having had neither impact nor quotes whatsoever! Until this paper.) I met Ewan last year when he was completing a PhD with Tony Pettitt at QUT in astrostatistics, but he did not work on this transdimensional ABC algorithm with application to worm invasion in Northern Alberta (an arXival I reviewed last week)… Ewan also runs a blog called Another astrostatistics blog, full of goodies, incl. the one where he announces his move to… zoology in Oxford! Anyway, this note extends our paper and its mathematically valid Savage-Dickey ratio representation to the case when the posterior distributions have no density against the Lebesgue measure, for instance with Dirichlet process or Gaussian process priors, using generic Radon-Nikodym derivatives instead. The example is somewhat artificial, superimposing a Dirichlet process prior onto the Old Faithful benchmark. But this is an interesting entry, worth mentioning, into the computation of Bayes factors. And into the elusive nature of the Savage-Dickey ratio representation.
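To recall what is being generalised: under the usual density assumptions (and the implicit conditioning questioned in our paper), the Savage-Dickey representation of the Bayes factor for testing the nested null H0: θ=θ0, with nuisance parameter ψ shared by both models, is the textbook version

B_{01}(x) = \frac{\pi_1(\theta_0\mid x)}{\pi_1(\theta_0)}

provided the prior on ψ under the null agrees with the conditional prior under the alternative, i.e. π0(ψ)=π1(ψ|θ0). Ewan’s note, like ours, is about making sense of the numerator and denominator when such densities do not exist.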

Le Monde puzzle [#838]

Posted in Kids, Books, R on November 2, 2013 by xi'an

Another one of those Le Monde mathematical puzzles whose wording is confusing to me:

The 40 members of the Academy vote for two prizes. [Like the one recently attributed to my friend and coauthor Olivier Cappé!] Once the votes are counted for both prizes, it appears that the total votes for each of the candidates take all values between 0 and 12. Is it possible that two academicians never pick the same pair of candidates?

I find it puzzling… First, because the total number of votes is then equal to 0+1+⋯+12=78, rather than 80=2×40. What happened to the votes of the “last” academician? Did she or he abstain? Or did two academicians each abstain on candidates for one prize? Second, because of the uncertainty in the original wording: can we assume with certainty that each integer between 0 and 12 is taken exactly once? If so, it would mean that the total number of candidates for the two prizes is equal to 13. Third, the question seems unrelated to the “data”: since only sums are known, switching the votes of academicians Dupond and Dupont for candidates Durand and Martin in prize A (or in prize B) does not change the number of votes for Durand and Martin.
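A quick R check of the vote count in the first point above:

sum(0:12)  #78, versus 2*40=80 ballots cast, hence two votes unaccounted for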

If we assume that each integer between 0 and 12 appears exactly once in the collection of the sums of the votes and that one academician abstained on both prizes, the number of candidates for one of the prizes can vary between 4 and 9, with compatible solutions provided by this R code:

N=5                                    #number of candidates for prize A (4 to 9)
ok=TRUE
while (ok){
  prop=sample(0:12,N)                  #vote totals for prize A candidates
  los=(1:13)[-(prop+1)]-1              #remaining values, totals for prize B
  ok=((sum(prop)!=39)||(sum(los)!=39))}  #each prize must gather 39 votes

which returns solutions like

> N=4
> prop
[1]  9 11  7 12
> los
[1]  0  1  2  3  4  5  6  8 10

but does not help in answering the question!

Now, with Robin‘s help (whose Corcoran memorial prize I should have mentioned in due time!), I reformulate the question as

The 40 members of the Academy vote for two prizes. Once the votes are counted for both prizes, it appears that all values between 0 and 12 are found among the total votes for each of the candidates. Is it possible that two academicians never pick the same pair of candidates?

which has a nicer solution: since all academicians have voted, there are two extra votes (80-78), meaning either that the value 2 appears twice or that the value 1 appears thrice. So there are either 14 or 15 candidates in toto, with at least 4 for a given prize, since three candidates can gather at most 12+11+10=33<40 votes. I then checked whether or not the above event could occur, using the following (pedestrian) R code:

for (t in 1:10^3){
  #pick the number of repeated values (one 2 or two 1s)
  R=sample(1:2,1); cand=13+R
  #pick the number of literary candidates
  N=sample(4:(cand-4),1)
  #collection of candidate vote totals
  if (R==2){
    votes=c(1,1,0:12)
    }else{
      votes=c(2,0:12)}
  #split the totals until each prize gathers exactly 40 votes
  ok=TRUE
  while (ok){
    drop=sample(1:cand,N)
    los=sort(votes[-drop])   #totals for scientific candidates
    prop=sort(votes[drop])   #totals for literary candidates
    ok=((sum(prop)!=40)||(sum(los)!=40))
    }
  #individual votes for literary candidates
  pool=NULL
  for (j in 1:N)
    pool=c(pool,rep(j,prop[j]))
  #individual votes for scientific candidates
  cool=NULL
  for (j in 1:(cand-N))
    cool=c(cool,rep(100+j,los[j]))
  cool=sample(cool) #random permutation of the voters
  #check whether two academicians picked the same pair
  for (a in 1:39){
    same=((a+1):40)[pool[(a+1):40]==pool[a]]
    if (length(same)>0){
      stoq=max(cool[same]==cool[a])
      if (stoq==1) break
      }
    }
  if (stoq==0) break
}

which does not return a positive answer to the above question. (And does not require simulations from contingency tables with fixed margins!)
