Archive for discretization

EntropyMCMC [R package]

Posted in Statistics on March 26, 2019 by xi'an

My colleague from the Université d’Orléans, Didier Chauveau, has just published on CRAN a new R package called EntropyMCMC, which contains convergence assessment tools for MCMC algorithms, based on non-parametric estimates of the Kullback-Leibler divergence between the current distribution and the target. (A while ago, quite a while ago!, we actually collaborated with a few others on the Springer-Verlag Lecture Notes in Statistics #135, Discretization and MCMC convergence assessments.) This follows from a series of papers by Didier Chauveau and Pierre Vandekerkhove that started with a nearest-neighbour entropy estimate. The evaluation of this entropy is based on N iid (parallel) chains, which calls for a parallel implementation. While the missing normalising constant is almost invariably unknown, the authors argue this is not a major issue “since we are mostly interested in the stabilization” of the entropy distance. Or in the comparison of two MCMC algorithms. [Disclaimer: I have not experimented with the package so far, hence cannot vouch for its performances in large dimensions or on problematic targets, but would as usual welcome comments and feedback on readers’ experiences.]
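
To fix ideas, here is a minimal stand-alone sketch of the underlying idea (my own illustration, not the package's API): a Kozachenko-Leonenko nearest-neighbour entropy estimate computed from the N parallel chains at a given iteration, turned into a Kullback-Leibler proxy that is only defined up to the missing normalising constant.

# Minimal sketch, not the EntropyMCMC API: nearest-neighbour
# (Kozachenko-Leonenko) entropy estimate from N parallel chains at one
# iteration, turned into a Kullback-Leibler proxy against the target.
kl_proxy <- function(X, log_target) {
  # X: N x d matrix, one row per parallel chain at the current iteration
  # log_target: unnormalised log-density of the target
  N <- nrow(X); d <- ncol(X)
  D <- as.matrix(dist(X))                    # pairwise Euclidean distances
  diag(D) <- Inf
  rho <- apply(D, 1, min)                    # nearest-neighbour distances
  log_cd <- (d / 2) * log(pi) - lgamma(d / 2 + 1)  # log-volume of unit d-ball
  H <- (d / N) * sum(log(rho)) + log(N - 1) + log_cd + 0.5772157  # + Euler gamma
  # KL(p_t || target) = -H(p_t) - E[log target]; the unknown normalising
  # constant only shifts this curve, hence monitoring its stabilisation in t
  -H - mean(apply(X, 1, log_target))
}

Plotting this quantity against the iteration index t, for each of several samplers, gives the kind of comparison the package automates.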

capture-recapture with continuous covariates

Posted in Books, pictures, Statistics, University life on September 14, 2015 by xi'an

This morning, I read a paper by Roland Langrock and Ruth King in a 2013 issue of Annals of Applied Statistics that had gone too far under my desk to be noticed… This problem of using continuous covariates in capture-recapture models is a frustrating one, as it is not clear what one should do at times when the subject, and therefore its covariates, is not observed. This is why I was quite excited by the [trinomial] paper of Catchpole, Morgan, and Tavecchia when they submitted it to JRSS Series B and I was the editor handling it. In the current paper, Langrock and King build a hidden Markov model on the capture history (as in Jérôme Dupuis’s main thesis paper, 1995), as well as a discretised Markov chain model on the covariates and a logit connection between those covariates and the probability of capture. (At first, I thought the Markov model was a sheer unconstrained Markov chain on the discretised space and found it curious that increasing the number of states had a positive impact on the estimation but, blame my Métro environment!, I had not read the paper carefully.)
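
As a rough sketch of the mechanism (my own notation and simplifications, not the authors' code): the covariate is discretised into m bins with an m×m transition matrix, the forward recursion marginalises the unobserved covariate, and a logit link turns each bin midpoint into a capture probability.

# Hypothetical sketch of the discretised-covariate HMM likelihood
# (survival and recruitment ignored for simplicity)
hmm_capture_loglik <- function(h, midpts, Gamma, beta, delta) {
  # h: 0/1 capture record after first capture
  # midpts: m bin midpoints; Gamma: m x m covariate transition matrix
  # beta: logit coefficients; delta: initial distribution over the bins
  p <- plogis(beta[1] + beta[2] * midpts)          # capture probability per bin
  alpha <- delta
  ll <- 0
  for (t in seq_along(h)) {
    alpha <- as.vector(alpha %*% Gamma)            # propagate the covariate chain
    alpha <- alpha * (if (h[t] == 1) p else 1 - p) # Bernoulli capture emission
    ll <- ll + log(sum(alpha))
    alpha <- alpha / sum(alpha)                    # rescale to avoid underflow
  }
  ll
}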

“The accuracy of the likelihood approximation increases with increasing m.” (p.1719)

While I acknowledge that something has to be done about the missing covariates, and that this approach may be the best one can expect in such circumstances, I nonetheless disagree with the above notion that increasing the number m of discretisation states will improve the likelihood approximation, simply because the model on the covariates, chosen ex nihilo, has no reason to fit the real phenomenon, especially since the values of the covariates impact the probability of capture: the individuals are not (likely to go) missing at random, i.e., independently of the covariates. For instance, in a lizard study on which Jérôme Dupuis worked in the early 1990s, weight and survival were unsurprisingly connected, with a higher mortality during the cold months when food was scarce. Using autoregressive-like models on the covariates misses the possibility of sudden changes in the covariates that could impact the capture patterns. I do not know whether or not this has been attempted in this area, but connecting the covariates between individuals at a specific time, so that missing covariates can be inferred from observed covariates, possibly with spatial patterns, would also make sense.
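
To make the objection concrete, here is how an assumed AR(1)-type covariate model would typically be discretised into the transition matrix used above (again a hypothetical sketch, not the paper's code): increasing m yields a finer approximation of that AR(1) model, but of nothing else.

# Sketch: discretise an assumed AR(1) covariate model onto m bins of [lo, hi]
make_covariate_chain <- function(m, lo, hi, phi, sigma) {
  breaks <- seq(lo, hi, length.out = m + 1)
  midpts <- (breaks[-1] + breaks[-(m + 1)]) / 2
  Gamma <- t(sapply(midpts, function(x)            # row: P(next bin | midpoint)
    diff(pnorm(breaks, mean = phi * x, sd = sigma))))
  list(midpts = midpts, Gamma = Gamma / rowSums(Gamma))  # renormalise edge loss
}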

In fine, I fear there is a strong and almost damning limitation to the notion of incorporating covariates into capture-recapture models, namely that, if a covariate is determinant in deciding between capture and non-capture, the non-capture range of the covariate will never be observed and hence cannot be derived from the observed values.

evaluating stochastic algorithms

Posted in Books, R, Statistics, University life on February 20, 2014 by xi'an

Reinaldo sent me this email a long while ago

Could you recommend me a nice reference about 
measures to evaluate stochastic algorithms (in 
particular focus in approximating posterior 
distributions).

and I hope he is still reading the ‘Og, despite my lack of a prompt reply! I procrastinated and procrastinated in answering this question as I did not have a ready reply… We have indeed seen (almost suffered from!) a flow of MCMC convergence diagnostics in the 90’s. And then it dried out. Maybe because of the impossibility of being “really” sure, unless running one’s MCMC much longer than “necessary to reach” stationarity and convergence. The heat of the dispute between the “single chain school” of Geyer (1992, Statistical Science) and the “multiple chain school” of Gelman and Rubin (1992, Statistical Science) has long since evaporated. My feeling is that people (still) run their MCMC samplers several times and check for coherence between the outcomes. Possibly using different kernels on parallel threads. At best, but rarely, they run (one or another form of) tempering to identify the modal zones of the target. And instances where non-trivial control variates are available are fairly rare. Hence, a non-sequitur reply at the MCMC level, as there is no automated tool available, in my opinion. (Even though I did not check the latest versions of BUGS.)
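
For what it is worth, the multiple-chain check I describe is easily run with the coda package; a toy sketch, with a made-up random-walk Metropolis sampler targeting N(0,1):

library(coda)
set.seed(1)
# toy random-walk Metropolis targeting N(0,1), started from dispersed points
rwm <- function(n, x0, s = 2) {
  x <- numeric(n)
  x[1] <- x0
  for (t in 2:n) {
    y <- x[t - 1] + rnorm(1, 0, s)
    a <- dnorm(y, log = TRUE) - dnorm(x[t - 1], log = TRUE)
    x[t] <- if (log(runif(1)) < a) y else x[t - 1]
  }
  x
}
chains <- mcmc.list(lapply(c(-10, -3, 3, 10),
                           function(x0) mcmc(rwm(5e3, x0))))
gelman.diag(chains)    # shrink factor near 1 suggests the chains agree
effectiveSize(chains)  # but agreement is no proof of convergence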

As it happened, Didier Chauveau from Orléans gave a talk today at Big’MC on convergence assessment based on entropy estimation, a joint work with Pierre Vandekerkhove. He mentioned SamplerCompare, an R package that appeared in 2010. Soon to come is their own EntropyMCMC package, using parallel simulation and k-nearest-neighbour estimation.

If I re-interpret the question as focussed on ABC algorithms, it gets both more delicate and easier. Easier because each ABC distribution is different, so there is no reason to look at the unreachable original target. More delicate because there are several parameters to calibrate (tolerance, choice of summary, …) on top of the number of MCMC simulations. In DIYABC, the outcome is always made of the superposition of several runs, to check for stability (or lack thereof). But this tells us nothing about the distance to the true original target. The obvious but impractical answer is to use some basic bootstrapping, as it is generally much too costly.
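
As a toy illustration of the calibration issue (a sketch, with every detail of the example made up): ABC rejection for a normal mean, where each tolerance eps defines its own ABC target, so that rerunning at a fixed eps only checks stability, never proximity to the original posterior.

# ABC rejection sketch: posterior on the mean of 20 N(theta, 1) observations
abc_run <- function(obs_stat, eps, n_sim = 1e5) {
  theta <- rnorm(n_sim, 0, 10)                 # simulate from the prior
  stat  <- rnorm(n_sim, theta, 1 / sqrt(20))   # sampling law of the mean of 20 obs
  theta[abs(stat - obs_stat) < eps]            # keep simulations close to the data
}
obs <- mean(rnorm(20, 2, 1))
# each eps defines a different ABC posterior; only stability across reruns
# at a given eps can be assessed
sapply(c(1, .5, .1), function(e)
  quantile(abc_run(obs, e), c(.25, .5, .75)))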