Comparisons with ABC-MCMC are also reported, and we give some insight (see the final Summary section) on how to exploit our data-cloning ABC when model simulations are expensive. The g-and-k example also shows how Beaumont’s regularization allows a rapid increase in the number of clones. Interestingly, when cloning is “cheap” (e.g. with carefully vectorized code) this translates into a reduced number of (ABC)MCMC iterations, which are overall less expensive than a regular ABC-MCMC run.

I found your blog while browsing GoodReads.com and have enjoyed reading through your thoughtful, thorough, and honest book reviews. The description of your blog noted that you review non-fiction science books, which is thrilling to discover (and admittedly difficult to find!). I am a pharmacist by trade, author, science enthusiast, and mom to a very energetic and inquisitive analytical thinker. Given my background coupled with the drive to teach my own little one, I have created the “Think-A-Lot-Tots” collection of educational science books for babies, toddlers, and kids (pre-K & elementary school).

The goal in developing these books is to introduce scientific concepts to young readers as well as build their vocabulary. I am both writer and illustrator, so creating these has been especially rewarding. I currently have 3 books available on Amazon: 2 are books-to-be-read geared towards biology, and 1 is a book-to-be-written-in that acts as a hands-on notebook outlining the scientific method. If you have time and are interested, would you be willing to review one of the books I have available?

All 3 titles can be found on my Amazon author page below. I am open to having you review whichever you find most interesting and would be happy to send you the PDF copy of your choice:

http://www.amazon.com/author/thomaidion

Self-published titles currently available:

Think-A-Lot-Tots: The Animal Cell

Think-A-Lot-Tots: The Neuron

Think-A-Lot-Tots: My Science Lab Notebook

Many thanks for your time and consideration!

Regards,

Thomai

In the context of Minh-Ngoc Tran’s talk, having an unbiased estimator of the log-likelihood (of the summary statistic) is useful in the optimisation algorithm for Variational Bayes (http://eprints.qut.edu.au/98023/8/98023.pdf).

So the quest for unbiasedness in these two papers has nothing to do with how the target of the synthetic likelihood methods approximates the actual posterior.

The hypothesis-test comment is very interesting and something we considered (and also commented on in Wood 2010). But to have sufficient power in a test for multivariate normality, the value of n needs to be too large, losing the computational gains of the synthetic likelihood in the first place. I guess it is possible to at least look at normality on the marginals, although I’m not sure how useful that would be: presumably, with n large enough, such a test would give a small p-value most of the time, since the summary statistic is very unlikely to be perfectly normally distributed. We have found that the BSL approach seems quite robust to some deviation from normality, but we have also considered examples with very heavy-tailed summaries where BSL fails completely.
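[Editor’s illustration] One cheap way to “look at normality on the marginals” is a per-marginal Shapiro–Wilk test on the matrix of simulated summaries. The comment does not specify any particular test or interface, so this is only a sketch; the function name and data layout (rows = simulations, columns = summary components) are my own choices.

```python
import numpy as np
from scipy.stats import shapiro


def marginal_normality_pvalues(S):
    """Shapiro-Wilk p-value for each marginal (column) of an
    (n, d) matrix of summary statistics simulated from the model.
    Small p-values flag marginals that look non-Gaussian."""
    S = np.asarray(S)
    return np.array([shapiro(S[:, j]).pvalue for j in range(S.shape[1])])
```

As the comment anticipates, with n large enough even mild non-normality yields tiny p-values, so these are better read as a rough diagnostic (which marginals look worst) than as a formal accept/reject test.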

You are correct that BSL has a hidden curse of dimensionality in that the normality assumption is likely to get worse as summary statistics are added. One must be careful with the way summaries are chosen. In future research we plan to try and relax the normality assumption (and hopefully maintain some computational advantage).
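[Editor’s illustration] For readers unfamiliar with the method under discussion: the synthetic likelihood of Wood (2010) replaces the intractable likelihood of the summary statistic with a Gaussian fitted to simulated summaries — simulate n summary vectors at the candidate parameter, estimate their mean and covariance, and evaluate the resulting multivariate normal density at the observed summary. A minimal sketch, assuming a user-supplied simulator `simulate_summaries(theta, rng)` (a hypothetical interface name, not from the comments):

```python
import numpy as np
from scipy.stats import multivariate_normal


def synthetic_loglik(s_obs, simulate_summaries, theta, n=200, rng=None):
    """Wood-style synthetic log-likelihood estimate at parameter theta.

    s_obs: observed summary vector (length d).
    simulate_summaries(theta, rng): returns one simulated summary vector.
    n: number of model simulations used to fit the Gaussian.
    """
    rng = np.random.default_rng(rng)
    # Simulate n summary vectors from the model at theta: shape (n, d).
    S = np.array([simulate_summaries(theta, rng) for _ in range(n)])
    mu = S.mean(axis=0)                 # fitted mean of the summaries
    Sigma = np.cov(S, rowvar=False)     # fitted covariance of the summaries
    # Gaussian plug-in density at the observed summary.
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)
```

The curse of dimensionality mentioned above is visible here: as summaries are added, the (d, d) covariance needs more simulations to estimate, and the Gaussian approximation itself tends to degrade; with heavy-tailed summaries the mean/covariance fit can fail completely, as the authors note.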

Sorry for the long reply.
