Archive for computing cost

anytime algorithm

Posted in Books, Statistics on January 11, 2017 by xi'an

Lawrence Murray, Sumeet Singh, Pierre Jacob, and Anthony Lee (Warwick) recently arXived a paper on Anytime Monte Carlo. (The earlier post on this topic is no coincidence, as Lawrence had told me about this problem when he visited Paris last Spring, including a forced extension of his stay when his passport got stolen.) The difficulty with anytime algorithms for MCMC is the lack of exchangeability of the MCMC sequence (except for formal settings where regeneration can be used).

When accounting for the duration of computation between steps of an MCMC generation, the Markov chain turns into a Markov jump process, whose stationary distribution α is biased by the average delivery time, unless that time is constant. The authors manage this difficulty by interlocking the original chain with a secondary chain so that even- and odd-index chains are independent. The secondary chain is then discarded. This provides a way to run an anytime MCMC algorithm. The principle can be extended to K+1 chains, run one after the other, since only one of those chains need be discarded. It also applies to SMC and SMC². The appeal of anytime simulation in this particle setting is that resampling is no longer a bottleneck, hence the computation is easily distributed among processors. One aspect I do not fully understand is how the computing budget is handled, since allocating the same real time to each iteration of SMC seems to envision each target in the sequence as requiring the same amount of time. (An interesting side remark made in this paper is the lack of exchangeability resulting from elaborate resampling mechanisms, a lack I had not thought of before.)
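
As a rough illustration of this interleaving idea, and certainly not the authors' implementation (which handles interruptions occurring mid-step and general Markov jump processes), here is a minimal Python sketch with a hypothetical anytime_two_chains routine: two chains are advanced alternately under a wall-clock budget, and only the chain not in line for the interrupted update is reported.

```python
import math
import random
import time

def mh_step(x, log_target, scale=1.0):
    """One random-walk Metropolis-Hastings step."""
    y = x + random.gauss(0.0, scale)
    log_accept = log_target(y) - log_target(x)
    if log_accept >= 0 or random.random() < math.exp(log_accept):
        return y
    return x

def anytime_two_chains(log_target, budget_seconds, x0=0.0):
    """Alternate single MH steps between two independent chains under a
    wall-clock budget; at expiry, report only the chain that is not next in
    line for an update, whose state would otherwise be length-biased under a
    random-time interruption. Illustrative sketch only."""
    states = [x0, x0]
    samples = ([], [])
    current = 0
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        states[current] = mh_step(states[current], log_target)
        samples[current].append(states[current])
        current = 1 - current  # hand over to the other chain
    return samples[1 - current]  # keep the chain not due for the next update

if __name__ == "__main__":
    log_target = lambda x: -0.5 * x * x  # standard normal target, up to a constant
    draws = anytime_two_chains(log_target, budget_seconds=0.5)
    if draws:
        print(len(draws), sum(draws) / len(draws))
```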

merging MCMC subposteriors

Posted in Books, Statistics, University life on June 8, 2016 by xi'an

Christopher Nemeth and Chris Sherlock arXived a paper yesterday about an approach to distributed MCMC sampling via Gaussian processes. As in several other papers commented on the ‘Og, the issue is to merge MCMC samples from sub-posteriors into a sample from, or any sort of approximation to, the complete (product) posterior. I am quite sympathetic to the approach adopted in this paper, namely to use a log-Gaussian process representation of each sub-posterior, to replace each sub-posterior with its log-Gaussian process posterior expectation in an MCMC or importance scheme, and to assess the variability of this approximation through the posterior variance of the sum of log-Gaussian processes. As pointed out by the authors, the closed-form representation of the posterior mean of the log-posterior is invaluable, as it allows for an HMC implementation, as well as for importance sampling solutions. The probabilistic numerics behind this perspective are also highly relevant.
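
To make the combination rule more concrete, here is a hedged Python sketch, with made-up stand-in draws and log-density values rather than actual sub-posterior MCMC output, of fitting one Gaussian process per sub-posterior (via scikit-learn's GaussianProcessRegressor) and summing the GP posterior means and variances to approximate the full log-posterior up to a constant. This is only my reading of the combination step, not the authors' code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_subposterior_gp(theta, log_dens):
    """Fit a GP to (parameter, log sub-posterior density) pairs from one batch."""
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-6)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    return gp.fit(theta.reshape(-1, 1), log_dens)

def merged_log_posterior(gps, grid):
    """Posterior mean and variance of the full log-posterior (up to a constant),
    obtained by summing the independent sub-posterior GP predictions."""
    means, variances = [], []
    for gp in gps:
        m, s = gp.predict(grid.reshape(-1, 1), return_std=True)
        means.append(m)
        variances.append(s ** 2)
    return np.sum(means, axis=0), np.sum(variances, axis=0)

# toy example: four batches of a one-dimensional normal model, with stand-in
# "MCMC draws" and log sub-posterior values (in practice these would come from
# the actual sub-posterior samplers)
rng = np.random.default_rng(0)
gps = []
for b in range(4):
    theta_b = rng.normal(loc=0.1 * b, scale=0.5, size=100)
    log_dens_b = -0.5 * (theta_b - 0.1 * b) ** 2 / 0.25
    gps.append(fit_subposterior_gp(theta_b, log_dens_b))

grid = np.linspace(-2.0, 2.0, 101)
log_post_mean, log_post_var = merged_log_posterior(gps, grid)
```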

A few arguable (?) points:

  1. The method often relies on importance sampling and hence on the choice of an importance function that is most likely influential but delicate to calibrate in complex settings, as I presume the Gaussian estimates are not useful in this regard;
  2. Using Monte Carlo to approximate the value of the approximate density at a given parameter value (by simulating from the posterior distribution) is natural, but is it that efficient?
  3. It could be that, by treating all sub-posterior samples as noisy versions of the same (true) posterior, a more accurate approximation of this posterior could be constructed;
  4. The method relies on the exponentiation of a posterior expectation or simulation. As of yesterday, I am somewhat wary of log-normal expectations!
  5. If the purpose of the exercise is to approximate univariate integrals, it would seem more profitable to use the Gaussian processes at the univariate level;
  6. The way the missing normalising constants and the duplicate simulations are processed (or not) may deserve further exploration;
  7. Computing costs remain, in fine, unclear when compared with the other methods in the toolbox.

analysing statistical and computational trade-off of estimation procedures

Posted in Books, pictures, Statistics, University life on July 8, 2015 by xi'an


“The collection of estimates may be determined by questions such as: How much storage is available? Can all the data be kept in memory or only a subset? How much processing power is available? Are there parallel or distributed systems that can be exploited?”

Daniel Sussman, Alexander Volfovsky, and Edoardo Airoldi from Harvard wrote a very interesting paper about setting a balance between statistical efficiency and computational efficiency, a theme that resonates with our recent work on ABC and with older considerations about the efficiency of Monte Carlo algorithms. While the paper avoids drifting towards computer science, even with a notion like algorithmic complexity, I like the introduction of a loss function in the comparison game, even though the way to combine both dimensions is unclear and may limit the exercise to an intellectual game. In an ideal setting one would set the computational time, like “I have one hour to get this estimate”, and compare risks under that computing constraint, possibly dumping some observations from the sample to satisfy it. Which is why this also reminds me of ABC: given an intractable likelihood, one starts by throwing away some data precision by using a tolerance ε and usually more through an insufficient statistic. Hence ABC procedures could also be compared in such terms.
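
Just to fix ideas on what such a time-budgeted comparison could look like in its crudest form (a toy of my own, not from the paper), here is a sketch of an estimator that processes only as many observations as a wall-clock budget allows and drops the rest:

```python
import time
import numpy as np

def budgeted_mean(stream, budget_seconds):
    """Average as many observations as the wall-clock budget allows,
    dropping whatever part of the sample remains unprocessed."""
    total, count = 0.0, 0
    deadline = time.monotonic() + budget_seconds
    for x in stream:
        if time.monotonic() >= deadline:
            break
        total += x
        count += 1
    return total / max(count, 1), count

rng = np.random.default_rng(2)
estimate, used = budgeted_mean(iter(rng.normal(loc=0.0, scale=1.0, size=1_000_000)),
                               budget_seconds=0.01)
print(estimate, used)  # for i.i.d. N(0,1) data the quadratic risk is about 1/used
```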

In the current paper, the authors only compare schemes that break the sample into bits so that each observation is handled only once, meaning it cannot be used in both the empirical mean and the empirical variance. This sounds a bit contrived in that the optimum allocation depends on the value of the parameter the procedure attempts to estimate. Still, it could lead to a new form of bandit problems: given a bandit with as many arms as there are parameters, at each new observation, decide on the allocation towards minimising the overall risk. (There is a missing sentence at the end of Section 4.)
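
As a toy version of this splitting constraint, on a normal example of my own rather than one from the paper, the following sketch allocates a fraction of the observations to the empirical mean and the remainder to the empirical variance, so that no observation is used twice; since the two risks scale roughly as σ²/k and 2σ⁴/(n−k), the best split indeed depends on the unknown σ.

```python
import numpy as np

def split_estimates(x, frac_mean=0.5):
    """Use each observation exactly once: the first block for the empirical
    mean, the held-out block for the empirical variance."""
    n = len(x)
    k = max(1, int(frac_mean * n))
    mean_hat = x[:k].mean()
    var_hat = x[k:].var(ddof=1)  # variance computed from the second block only
    return mean_hat, var_hat

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=3.0, size=10_000)
for frac in (0.2, 0.5, 0.8):
    m, v = split_estimates(x, frac)
    print(frac, round(m, 3), round(v, 3))
```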

Any direction for turning those considerations into a practical decision machine would be fantastic, although the difficulties are formidable, from deciding between estimators and selecting a class of estimators, to computing costs and risks that depend on unknown parameters.