Search Results

parallel MCMC

September 9, 2020

Yesterday, I remotely took part in the thesis defence of Balazs Nemeth at Hasselt University, Belgium, as the pandemic conditions were alas still too uncertain to allow for travelling between France and Belgium. The thesis is about parallel strategies for speeding up MCMC, although the title is “Message passing computational methods with pharmacometrics applications”, as […]

parallelising MCMC via random forests

November 26, 2019

We have just arXived a new paper, written with my former PhD student Wu Changye, on the use of random forest regressions to learn the partial posteriors simulated in divide-and-conquer MCMC, which splits the whole data set into batches and runs MCMC algorithms separately over each batch to produce samples of parameters. Here, we use each resulting […]
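The split-and-sample step can be sketched as follows. This is a minimal toy illustration (Gaussian mean, fractionated prior), and the recombination shown is a simple consensus-style average of sub-posterior draws, not the random-forest merge of the paper; all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1000 observations from N(mu=2, sigma=1); mu is the unknown.
data = rng.normal(2.0, 1.0, size=1000)
K = 4                                   # number of batches
batches = np.array_split(data, K)

def batch_log_post(mu, x, K):
    # Sub-posterior for one batch: the N(0, 10^2) prior is fractionated
    # as prior^(1/K) so that the product of the K sub-posteriors
    # matches the full posterior.
    log_lik = -0.5 * np.sum((x - mu) ** 2)
    log_prior = -0.5 * mu ** 2 / 100.0
    return log_lik + log_prior / K

def metropolis(logp, n_iter=5000, step=0.2):
    # Plain random-walk Metropolis, returning post-burn-in draws.
    mu, samples = 0.0, []
    lp = logp(mu)
    for _ in range(n_iter):
        prop = mu + step * rng.normal()
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            mu, lp = prop, lp_prop
        samples.append(mu)
    return np.array(samples[1000:])

# In practice each batch runs on its own worker; here we just loop.
sub_samples = [metropolis(lambda m, x=x: batch_log_post(m, x, K))
               for x in batches]

# Consensus-style recombination: average the k-th draw across batches.
combined = np.mean(sub_samples, axis=0)
print(combined.mean())  # close to the true mean 2.0
```

The embarrassingly parallel gain is that the K Metropolis runs never communicate; only the final draws are shipped back for recombination.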

deep and embarrassingly parallel MCMC

April 9, 2019

Diego Mesquita, Paul Blomstedt, and Samuel Kaski (from Helsinki, like the above picture) just arXived a paper on embarrassingly parallel MCMC, following a series of papers discussed on this ‘og in the past. They use the deep learning approach of Dinh et al. (2017) for the computation of the probability density of a convoluted and […]

parallelizable sampling method for parameter inference of large biochemical reaction models

June 18, 2018

I came across this older (2016) arXiv paper by Jan Mikelson and Mustafa Khammash [antedated as of April 25, 2018] as another version of nested sampling. The novelty of the approach lies in applying nested sampling to approximate the likelihood function in the case of involved hidden Markov models (although the name itself does not […]
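As a reminder of the nested sampling mechanism underlying such schemes, here is a bare-bones version on a toy problem (flat prior on [-5, 5], standard Gaussian likelihood, so the evidence is about 0.1). This is a generic sketch, not the Mikelson and Khammash construction; replacing dead points by rejection from the prior works in one dimension but real implementations use constrained MCMC or slice moves:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_like(x):
    # Gaussian likelihood N(x; 0, 1).
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def sample_prior(n=1):
    return rng.uniform(-5.0, 5.0, size=n)   # flat prior on [-5, 5]

n_live, n_iter = 200, 1500
live = sample_prior(n_live)
live_ll = log_like(live)
log_X, log_Z = 0.0, -np.inf
for _ in range(n_iter):
    i = np.argmin(live_ll)
    # Weight of the discarded shell: X_{t-1} - X_t = X_{t-1} / (N + 1).
    log_w = log_X - np.log(n_live + 1)
    log_Z = np.logaddexp(log_Z, live_ll[i] + log_w)
    log_X += np.log(n_live / (n_live + 1))  # shrink the prior volume
    # Replace the dead point by a prior draw above the current threshold.
    while True:
        x = sample_prior(1)[0]
        if log_like(x) > live_ll[i]:
            break
    live[i], live_ll[i] = x, log_like(x)

# Residual mass carried by the final live points.
log_Z = np.logaddexp(log_Z,
                     log_X - np.log(n_live) + np.logaddexp.reduce(live_ll))
print(np.exp(log_Z))   # evidence estimate, analytically ~0.1 here
```

The sequence of likelihood shells, weighted by the shrinking prior volume, is what turns a sampler into an estimator of the marginal likelihood.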

parallel adaptive importance sampling

August 30, 2016

Following Paul Russell’s talk at MCqMC 2016, I took a look at his recently arXived paper in the plane to Sydney. The pseudo-code representation of the method is identical to that of our population Monte Carlo algorithm, as is the suggestion to approximate the posterior by a mixture, but one novel aspect is to use Reich’s ensemble […]