Archive for SMC²

parallelizable sampling method for parameter inference of large biochemical reaction models

Posted in Books, Statistics on June 18, 2018 by xi'an

I came across this older (2016) arXiv paper by Jan Mikelson and Mustafa Khammash [last revised on April 25, 2018] as another version of nested sampling. The novelty of the approach is in applying nested sampling for approximating the likelihood function in the case of involved hidden Markov models (although the name itself does not appear in the paper). This is an interesting proposal, even though there is a fairly large and very active literature on computational approaches to such objects, from sequential Monte Carlo (SMC) to particle MCMC (pMCMC), to SMC².

“We found a way to efficiently sample parameter vectors (particles) from the super level set of the likelihood (sets of particles with a likelihood equal to or higher than some threshold) corresponding to an increasing sequence of thresholds” (p.2)

The approach here is an aggregate of nested sampling and particle filters (SMC), the filters being paradoxically employed to approximate the likelihood function itself, hence called repeatedly as the value of the parameter θ changes. Unless I am confused, it seems to me that, once started with particle filters, the authors could have used them all the way to the upper level (through, again, SMC²). Instead, and this brings a further degree of (uncorrected) approximation to the procedure, a Dirichlet process prior is used to estimate Gaussian mixture approximations to the true posterior distribution(s) on the (super) level sets. Now, approximating a distribution that is zero outside a compact set [the prior restricted to the likelihood being larger than the threshold] by a distribution with an infinite support does not a priori sound like a particularly enticing idea. Note also that there is no later correction for using the mixture approximation to the restricted prior. (The method also involves an approximation of the (Lebesgue) volume of the level sets that may be poor in higher dimensions.)
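To fix ideas on the overall structure (and with no claim to reproduce the authors' implementation), here is a minimal Python sketch of a nested-sampling-style exploration of likelihood super level sets, in which every parameter particle carries a noisy bootstrap particle-filter estimate of its likelihood; the linear-Gaussian toy model, the naive rejection-from-the-prior refresh step, and all names are assumptions of mine.

```python
# Illustrative sketch only (not the authors' code): nested-sampling-style exploration
# of likelihood super level sets, with each parameter particle's likelihood replaced
# by a noisy bootstrap particle-filter estimate.
import numpy as np

rng = np.random.default_rng(0)

def simulate_data(theta, T=40):
    """Toy 1-d linear-Gaussian state space: x_t = theta*x_{t-1} + v_t, y_t = x_t + e_t."""
    x, ys = 0.0, []
    for _ in range(T):
        x = theta * x + rng.normal()
        ys.append(x + rng.normal())
    return np.array(ys)

def particle_filter_loglik(theta, ys, N=100):
    """Bootstrap particle filter estimate of log p(y | theta)."""
    x = rng.normal(size=N)
    loglik = 0.0
    for y in ys:
        x = theta * x + rng.normal(size=N)        # propagate the states
        logw = -0.5 * (y - x) ** 2                # unit-variance Gaussian measurement
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())            # incremental log-likelihood estimate
        x = rng.choice(x, size=N, p=w / w.sum())  # multinomial resampling
    return loglik

def level_set_sampler(ys, n_live=50, n_iter=100):
    """Raise the threshold to the worst live particle's (estimated) log-likelihood and
    replace that particle by a draw from the current super level set. The paper proposes
    from a DP-GMM fit; a naive rejection from the prior is used here instead."""
    live = rng.uniform(-1, 1, size=n_live)        # flat prior on theta over (-1, 1)
    live_ll = np.array([particle_filter_loglik(t, ys) for t in live])
    for _ in range(n_iter):
        worst = live_ll.argmin()
        threshold = live_ll[worst]
        while True:                               # sample {theta: L(theta) > threshold}
            cand = rng.uniform(-1, 1)
            cand_ll = particle_filter_loglik(cand, ys)
            if cand_ll > threshold:
                break
        live[worst], live_ll[worst] = cand, cand_ll
    return live, live_ll

ys = simulate_data(0.7)
live, live_ll = level_set_sampler(ys)
print("surviving particles concentrate around theta =", round(live.mean(), 2))
```

In the paper, the refresh step would instead draw from the DP-GMM approximation of the restricted prior, which is precisely where the concerns above about infinite support and the absence of a later correction enter.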

“DP-GMM estimations work very well in high dimensional spaces and since we use rejection sampling to obtain samples from the level set by sampling from the DP-GMM estimation, the estimation error does not get propagated through iterations.” (p.13)

One aspect of the paper that puzzles me is the use of a rejection sampler to produce new parameter simulations from a given (super) level set, as this involves a lower bound M on the Gaussian mixture approximation over this level set. If a Gaussian mixture approximation is available, there is apparently no need for this, as it can be sampled directly and values below the threshold can be disposed of. It is also unclear why the error does not propagate from one level to the next, if only because of the connection between the successive particle approximations.
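For what it is worth, here is how I read the direct alternative, as a purely illustrative Python sketch: sample the fitted mixture as is and discard the draws whose (estimated) likelihood falls below the current threshold, with no need for a bound M. The mixture parameters and the toy log-likelihood are made up for the example.

```python
# Illustrative only: sample a fitted Gaussian mixture directly and keep the draws that
# land inside the current likelihood super level set, without a rejection bound M.
import numpy as np

rng = np.random.default_rng(1)

def sample_gmm(weights, means, sds, n):
    """Draw n samples from a 1-d Gaussian mixture with already-fitted parameters."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comp], sds[comp])

def loglik_hat(theta):
    """Stand-in for a particle-filter estimate of the log-likelihood (toy, peaked at 0.7)."""
    return -0.5 * (theta - 0.7) ** 2 / 0.01

weights = np.array([0.6, 0.4])                    # pretend DP-GMM fit to the level set
means, sds = np.array([0.65, 0.8]), np.array([0.10, 0.05])

threshold = -2.0                                  # current log-likelihood threshold
draws = sample_gmm(weights, means, sds, 1000)
kept = draws[np.array([loglik_hat(t) for t in draws]) >= threshold]
print(f"{kept.size}/1000 draws fall inside the super level set")
```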

 

the Hyvärinen score is back

Posted in pictures, Statistics, Travel on November 21, 2017 by xi'an

Stéphane Shao, Pierre Jacob and co-authors from Harvard have just posted on arXiv a new paper on Bayesian model comparison using the Hyvärinen score

\mathcal{H}(y, p) = 2\Delta_y \log p(y) + ||\nabla_y \log p(y)||^2

which thus uses the Laplacian as a natural and normalisation-free penalisation for the score test. (Score that I first met in Padova, a few weeks before moving from X to IX.) Which brings a decision-theoretic alternative to the Bayes factor and which delivers a coherent answer when using improper priors. Thus a very appealing proposal in my (biased) opinion! The paper is mostly computational in that it proposes SMC and SMC² solutions to handle the estimation of the Hyvärinen score for models with tractable likelihoods and tractable completed likelihoods, respectively. (Reminding me that Pierre worked on SMC² algorithms quite early during his Ph.D. thesis.)
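For a model with a tractable predictive density the score is available in closed form; as a purely illustrative check (my own toy, not taken from the paper), here it is for a univariate Gaussian, which also makes the insensitivity to the normalising constant transparent.

```python
# Hedged sketch: H(y, p) = 2 * (d^2/dy^2) log p(y) + ((d/dy) log p(y))^2 evaluated in
# closed form for a univariate Gaussian predictive; all names are mine.

def hyvarinen_score_gaussian(y, mu, sigma2):
    """H(y, N(mu, sigma2)) = -2/sigma2 + (y - mu)**2 / sigma2**2."""
    grad = -(y - mu) / sigma2      # d/dy log p(y)
    lap = -1.0 / sigma2            # d^2/dy^2 log p(y)
    return 2.0 * lap + grad ** 2

# Scaling p by any constant c > 0 leaves grad and lap of log p unchanged, so the score
# requires no normalising constant, unlike the log score.
print(hyvarinen_score_gaussian(1.2, mu=0.0, sigma2=1.0))
```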

A most interesting remark in the paper is to recall that the Hyvärinen score associated with a generic model on a series must be the prequential (predictive) version

\mathcal{H}_T (M) = \sum_{t=1}^T \mathcal{H}(y_t; p_M(dy_t|y_{1:(t-1)}))

rather than the version on the joint marginal density of the whole series, for consistency reasons. (Followed by a remark within the remark that the logarithmic scoring rule does not call for this distinction. And I had to write down the cascading representation

\log p(y_{1:T})=\sum_{t=1}^T \log p(y_t|y_{1:t-1})

to convince myself that this unnatural decomposition, where the posterior on θ varies with each term, indeed holds!)

This prequential decomposition is however a plus in terms of computation when resorting to sequential Monte Carlo, since each time step produces an evaluation of the associated marginal. In the case of state-space models, another decomposition of the authors, based on measurement densities and partial conditional expectations of the latent states, allows for another (SMC²) approximation. The paper also establishes that, for non-nested models, the Hyvärinen score as a model selection tool asymptotically selects the model closest to the data generating process, in terms of the divergence induced by the score, even for state-space models and under some technical assumptions. From this asymptotic perspective, the paper exhibits an example where the Bayes factor and the Hyvärinen factor disagree, even asymptotically in the number of observations, about which mis-specified model to select. And last but not least the authors propose and assess a discrete alternative relying on finite differences instead of derivatives, which remains a proper scoring rule.
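As a toy illustration of the prequential computation, and of the finite-difference flavour of the discrete alternative (used here merely as a numerical stand-in for the derivatives), the following sketch accumulates the score one observation at a time for a conjugate Gaussian model; none of it reflects the authors' SMC or SMC² implementations.

```python
# Toy sketch (mine, not the authors' SMC^2 scheme): prequential Hyvarinen score
# accumulated one observation at a time, with the derivatives of the log predictive
# replaced by central finite differences, for i.i.d. N(theta, 1) data with a N(0, 1)
# prior on theta so that the one-step predictive is available in closed form.
import numpy as np

def log_predictive(y_new, y_past):
    """log p(y_new | y_past) under y ~ N(theta, 1), theta ~ N(0, 1): Gaussian predictive."""
    post_prec = 1.0 + len(y_past)             # posterior precision of theta
    post_mean = np.sum(y_past) / post_prec
    pred_var = 1.0 / post_prec + 1.0
    return -0.5 * np.log(2 * np.pi * pred_var) - 0.5 * (y_new - post_mean) ** 2 / pred_var

def hyvarinen_term(y, y_past, h=1e-3):
    """2 * d2/dy2 log p + (d/dy log p)^2, both by central finite differences."""
    lp = lambda z: log_predictive(z, y_past)
    grad = (lp(y + h) - lp(y - h)) / (2 * h)
    lap = (lp(y + h) - 2 * lp(y) + lp(y - h)) / h ** 2
    return 2 * lap + grad ** 2

rng = np.random.default_rng(2)
ys = rng.normal(0.5, 1.0, size=100)
score = sum(hyvarinen_term(ys[t], ys[:t]) for t in range(len(ys)))
print("prequential Hyvarinen score:", round(score, 2))
```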

I am quite excited by this work (call me biased!) and I hope it can induce follow-up work as a viable alternative to Bayes factors, if only for being more robust to the [unspecified] impact of the prior tails. As in the above picture, where some realisations of the SMC² output and of the sequential decision process see the wrong model being almost acceptable for quite a long while…

anytime algorithm

Posted in Books, Statistics on January 11, 2017 by xi'an

Lawrence Murray, Sumeet Singh, Pierre Jacob, and Anthony Lee (Warwick) recently arXived a paper on Anytime Monte Carlo. (The earlier post on this topic is no coincidence, as Lawrence had told me about this problem when he visited Paris last Spring. Including a forced extension when his passport got stolen.) The difficulty with anytime algorithms for MCMC is the lack of exchangeability of the MCMC sequence (except for formal settings where regeneration can be used).

When accounting for the duration of computation between steps of an MCMC generation, the Markov chain turns into a Markov jump process whose stationary distribution α is biased by the average delivery time, unless that time is constant. The authors manage this difficulty by interlocking the original chain with a secondary chain so that even- and odd-index chains are independent. The secondary chain is then discarded, which provides a way to run an anytime MCMC. The principle can be extended to K+1 chains, run one after the other, since only one of those chains need be discarded. It also applies to SMC and SMC². The appeal of anytime simulation in this particle setting is that resampling is no longer a bottleneck, hence the computation is easily distributed among processors. One aspect I do not fully understand is how the computing budget is handled, since allocating the same real time to each iteration of SMC seems to envision each target in the sequence as requiring the same amount of time. (An interesting side remark made in this paper is the lack of exchangeability resulting from elaborate resampling mechanisms, a lack I had not thought of before.)
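Here is a heavily simplified Python sketch of how I understand the multiple-chain trick: K+1 Metropolis-Hastings chains run round-robin against a real-time budget, with the chain in progress at the deadline discarded. The target, the artificial state-dependent computing cost, and the choice to drop that chain's entire output are illustrative assumptions of mine, not the authors' construction.

```python
# Toy sketch of the anytime idea (illustrative assumptions, not the paper's algorithm):
# run K+1 chains one after the other, stop at a real-time budget, and discard the chain
# being worked on at the deadline to avoid biasing towards cheap-to-compute states.
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    return -0.5 * x ** 2                          # standard normal target, up to a constant

def mh_step(x):
    """One random-walk Metropolis step plus an artificial computing cost depending on x."""
    y = x + rng.normal(scale=0.5)
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        x = y
    cost = 1.0 + abs(x)                           # dearer states are the source of the bias
    return x, cost

def anytime_mh(budget=5000.0, K=4):
    chains = [0.0] * (K + 1)                      # K+1 chains, worked on in turn
    kept = [[] for _ in range(K + 1)]
    clock, current = 0.0, 0
    while True:
        chains[current], cost = mh_step(chains[current])
        clock += cost
        if clock >= budget:                       # real-time deadline hit on chain `current`
            interrupted = current
            break
        kept[current].append(chains[current])
        current = (current + 1) % (K + 1)
    # discard the interrupted chain's output; keep the K remaining chains
    return [s for k, states in enumerate(kept) if k != interrupted for s in states]

samples = anytime_mh()
print(len(samples), "kept samples, mean", round(np.mean(samples), 3),
      "and variance", round(np.var(samples), 3))
```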

ISBA regional meeting in Varanasi (day 3)

Posted in pictures, Statistics, Travel, University life on January 11, 2013 by xi'an

[plaque in the department of mathematical sciences, BHU, Varanasi, Uttar Pradesh, Jan. 10, 2013]

On my last day at the ISBA meeting in Varanasi, I attended a few talks before being kindly driven to the airport (early, too early, but with the unpredictable traffic there, it was better to err on the side of caution!). In the dynamical model session, Simon Wilson presented a way to approximate posteriors for HMMs based on Chib's (or Bayes'!) formula, while Jonathan Stroud exposed another approach to state-space model approximation involving a move of the state parameter based on a normal approximation of its conditional given the observable, an approximation which seemed acceptable for the cloud analysis model he was processing. Nicolas Chopin then gave a quick introduction to particle MCMC, all the way to SMC². (As a stern chairman of the session, I know Nicolas felt he did not have enough time, but he did a really good job of motivating those different methods, in particular in explaining why the auxiliary variable approach makes the unbiased estimator of the likelihood a valid MCMC method.) Peter Green's plenary talk was about an emission tomography image analysis whose statistical processing turned into a complex (Bernstein-von Mises) convergence theorem (a preliminary version of which I saw in Bristol during Natalia Bochkina's talk).

[boats on the Ganges before sunset, Jan. 8, 2013]

Overall, as forewarned by and expected from the program, this ISBA meeting was of the highest scientific quality. (I only wish I had had Hindu god abilities to duplicate myself and attend several parallel sessions at the same time!) Besides, much besides!, the warm attention paid to everyone by the organisers was just simply un-be-lie-vable! The cultural program was on par with the scientific program. The numerous graduate students and faculty involved in the workshop organisation had a minute knowledge of our schedules and locations, and were constantly anticipating our needs and moves. Almost to a fault, i.e. to a point that was close to embarrassing for our cultural habits. I am therefore immensely grateful [personally and as former ISBA president] to all those people who contributed to the success of this ISBA meeting, and first and foremost to Professor Satyanshu Upadhyay, who worked relentlessly towards this goal for many months! (As a conference organiser, I realise I was and am simply unable to provide this level of welcome to the participants, even for much smaller meetings… The contrast with my previous conference in Berlin could not be more extreme as, for a much higher registration fee, the return was very, very limited.) I will forever (at least until my next reincarnation!) keep the memory of this meeting as a very special one, quite besides its giving me the opportunity of my first visit to India.

ABC and SMC²

Posted in Books, Statistics, University life on January 17, 2012 by xi'an

Today, Nicolas Chopin gave his talk at CREST. While he tried to encompass as much as possible of the background on ABC for a general audience (he is also giving the same talk in Nice this week), the talk led me to think about the parameterisation of ABC methods. He chose a non-parametric presentation, as in Fearnhead and Prangle. From this viewpoint, the choice of the kernel K and of the distance measure should not matter so much, when compared with the choice of the bandwidth/tolerance. Further, the non-parametric flavour tells us that the tolerance itself can be optimised for a given sample size, i.e., in ABC settings, a given number of iterations. When looking at MCMC-ABC, I briefly wondered whether or not the tolerance should be optimised against the Metropolis-Hastings acceptance probability, because from this perspective the ratio of the kernels is a ratio of estimators of the densities of the data at both values of the parameter(s), rather than an estimator of the ratio.
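To make the role of the kernel and of the tolerance explicit, here is a toy MCMC-ABC sketch (my own illustration, not material from the talk), in which the acceptance ratio is exactly a ratio of one-sample kernel estimates of the data density at the current and proposed parameter values; the model, the kernel, and the tuning constants are all assumptions.

```python
# Toy MCMC-ABC sketch (all modelling and tuning choices are illustrative assumptions):
# with a flat prior and a symmetric random walk, the acceptance ratio reduces to
# K_eps(s(z') - s(y)) / K_eps(s(z) - s(y)), i.e. a ratio of one-sample estimates of
# the data density at the proposed and current parameter values.
import numpy as np

rng = np.random.default_rng(4)

def simulate(theta, n=50):
    return rng.normal(theta, 1.0, size=n)         # toy model: N(theta, 1) data

def summary(x):
    return x.mean()                               # sufficient here, rarely so in general

def kernel(d, eps):
    return np.exp(-0.5 * (d / eps) ** 2)          # K_eps, unnormalised is enough in the ratio

def abc_mcmc(y, eps=0.1, n_iter=5000):
    s_obs = summary(y)
    theta = 0.0
    k_curr = kernel(summary(simulate(theta)) - s_obs, eps)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(scale=0.3)      # symmetric random-walk proposal, flat prior
        k_prop = kernel(summary(simulate(prop)) - s_obs, eps)
        if rng.uniform() < k_prop / max(k_curr, 1e-300):
            theta, k_curr = prop, k_prop
        chain.append(theta)
    return np.array(chain)

y = rng.normal(0.5, 1.0, size=50)
chain = abc_mcmc(y)
print("ABC posterior mean:", round(chain.mean(), 2))
```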

The second part of Nicolas’ talk was about SMC² and hence unrelated to ABC, except that he mentioned that SMC provides an (unbiased) approximation of the Metropolis-Hastings acceptance probability. Which is also an interpretation of the ideal ABC (zero tolerance) and of noisy ABC. (Plus, Marc Beaumont is a common denominator for both perspectives!) Unfortunately, Nicolas ran out of time (because of a tight schedule) and did not give much detail on SMC². Overall, his motivational introduction was quite worth it, though! Here are his slides:

This talk also led me to reflect on the organisation of my upcoming PhD class on ABC, which should

  • justify MCMC-ABC from first principles, for regular and noisy ABC;
  • aggregate the literature on non-parametric justifications of ABC. The 2002 paper by Beaumont et al. remains a reference there;
  • understand the link with indirect inference (the book by Gouriéroux and Monfort is sitting on my desk!);
  • answer the questions “is ABC the sole and unique solution?” and “how can we evaluate/estimate the error?”.

ABC and Monte Carlo seminar in CREST

Posted in Statistics, University life on January 13, 2012 by xi'an

On Monday (Jan. 16, 3pm, CREST-ENSAE, Room S08), Nicolas Chopin will present a talk on:

Dealing with intractability: recent advances in Bayesian Monte-Carlo methods for intractable likelihoods
(joint works with P. Jacob, O. Papaspiliopoulos and S. Barthelmé)

This talk will start with a review of recent advances in Monte Carlo methodology for intractable problems, that is, problems involving intractable quantities, typically intractable likelihoods. I will discuss in turn ABC-type methods (a.k.a. likelihood-free), auxiliary variable methods for dealing with intractable normalising constants (e.g., the exchange algorithm), and MC²-type algorithms, a recent extension of which is the PMCMC algorithm (Andrieu et al., 2010). Then, I will present two recent pieces of work in this direction. First, and more briefly, I’ll present the ABC-EP algorithm (Chopin and Barthelmé, 2011). I’ll also discuss some possible future research in ABC theory. Second, I’ll discuss the SMC² algorithm (Chopin, Jacob and Papaspiliopoulos, 2011), a new type of MC² algorithm that makes it possible to perform sequential analysis for virtually any state-space model, including models with an intractable Markov transition.

Catching up faster by switching sooner

Posted in R, Statistics, University life on October 26, 2011 by xi'an

Here is our discussion (with Nicolas Chopin) of last Wednesday’s Read Paper by T. van Erven, P. Grünwald and S. de Rooij (Centrum voor Wiskunde en Informatica, Amsterdam), entitled Catching up faster by switching sooner: a predictive approach to adaptive estimation with an application to the Akaike information criterion–Bayesian information criterion dilemma. It is still open for written discussions, to be published in Series B. Even though the topic is quite tangential to our interests, the fact that the authors evolve in a Bayesian environment called for the following (my main contribution being to point out that the procedure is not Bayesian, since it fails to incorporate the switch in the predictive (6) and hence uses the same data for all models under competition…):

Figure 1 – Bayes factors of Model 2 vs. Model 1 (gray line) and Model 3 vs. Model 1 (dark line), plotted against the number of observations, i.e. of iterations, when comparing three stochastic volatility models; see Chopin et al. (2011) for full details.

This paper is an interesting attempt at a particularly important problem. We nonetheless believe more classical tools should be used instead if models are truly relevant in the inference led by the authors: Figure 1, reproduced from Chopin et al. (2011), plots [against time] the Bayes factors of Models 2 and 3 vs. Model 1, where all models are state-space models of increasing complexity, fitted to some real data. In this context, one often observes that more complex models need more time to “ascertain themselves”. On the other hand, even BMA-based prediction is a very challenging computational problem (the only generic solution currently being the SMC² algorithm of the aforementioned paper), and we believe that the currently proposed predictive strategy will remain too computationally expensive for practical use with nonlinear state-space models.
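As a side illustration of the sequential Bayes factor computation behind Figure 1 (in a toy conjugate setting only; for the state-space models of the figure each predictive term would require SMC or SMC² estimates), the log Bayes factor can be accumulated from one-step predictive densities as observations arrive. Everything below is an assumption made for the sake of the example.

```python
# Toy illustration (not the SMC^2 computation behind Figure 1): sequential log Bayes
# factor of Model 2 (unknown mean) against Model 1 (zero mean), accumulated from
# one-step predictive densities as the observations arrive.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
y = rng.normal(0.3, 1.0, size=200)                # data mildly favouring a nonzero mean

def log_predictive_m2(y_new, y_past, prior_var=1.0):
    """One-step predictive under y ~ N(theta, 1), theta ~ N(0, prior_var) (conjugate)."""
    prec = 1.0 / prior_var + len(y_past)
    post_mean = np.sum(y_past) / prec
    return norm.logpdf(y_new, loc=post_mean, scale=np.sqrt(1.0 / prec + 1.0))

log_bf, trajectory = 0.0, []
for t in range(len(y)):
    log_bf += log_predictive_m2(y[t], y[:t]) - norm.logpdf(y[t], 0.0, 1.0)  # Model 1 term
    trajectory.append(log_bf)
print("log Bayes factor after 50 and 200 observations:",
      round(trajectory[49], 2), "and", round(trajectory[-1], 2))
```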

For other classes of models, since the provable methods put forward by this paper are based on “frozen strategies”, which are hard to defend from a modelling perspective, and since the more reasonable “basic switch” strategy seems to perform as well numerically, we would be curious to see how the proposed methods compare with predictive distributions obtained from genuine Bayesian models. A true change-point model, for instance, would generate a coherent prediction strategy, which is not equivalent to the basic switch strategy. (Indeed, for one thing, the proposal made by the authors utilises the whole past to compute the switching probabilities, rather than allocating the proper portion of the data to the relevant model. In this sense, the proposal is “using the data [at least] twice” in a pseudo-Bayesian setting, similar to Aitkin’s (1991).) More generally, the authors seem to focus on situations where the true generative process belongs to a non-parametric class, and the competing models form an infinite sequence of richer and richer (but also more and more complex) parametric models, which is a very sensible set-up in practice. Then, we wonder whether or not it would make more sense to set the prior distribution over the switch parameter s in such a way that (a) switches only occur from one model to another model of greater complexity and (b) the number of switches is infinite.

For ABC readers, note the future Read Paper meeting on December 14 by Paul Fearnhead and Dennis Prangle.