## Archive for simulation

## unbiased estimator of determinant

Posted in Books, Statistics with tags arXiv, determinant computation, pseudo-marginal MCMC, simulation, trick on July 3, 2020 by xi'an

**J**ust saw this new [one page] posting on arXiv, showing that an unbiased estimate of the determinant can be derived much faster, if less reliably. This trick can be helpful for (pseudo-marginal) MCMC steps when the determinant itself is of limited interest… (The importance version is not truly needed!)

## data science [down] under the hood [webinar]

Posted in Statistics with tags approximate Bayesian inference, Bayesian Analysis, Brisbane, Chris Drovandi, computational statistics, QUT, simulation on June 21, 2020 by xi'an

## sans sérif & sans chevron

Posted in Books, R, Statistics, University life with tags bicycle, book publishing, chevron, final exam, LaTeX, mathematical statistics, multiple answer test, R, R code, sans-sérif, simulation, vélo, Zoom on June 17, 2020 by xi'an

```latex
{\sf df=function(x)2*pi*x-4*(x>1)*acos(1/(x+(1-x)*(x<1)))}
```

**A**s I was LaTeXing a remote exam for next week, including some R code questions, I came across the apparent impossibility of using the < and > symbols in the sans-sérif “\sf” font… Which is a surprise, given the ubiquity of these symbols in R and in my LaTeXing books over the years. I must have always used “\tt” and “\verb” then! On the side, I tried to work with the automultiplechoice LaTeX package [which should be renamed velomultiplechoice!] of Alexis Bienvenüe, which proved a bit of a challenge as the downloadable version contained a flawed automultiplechoice.sty file! I still managed to produce a 400-question exam with random permutations of questions and potential answers. But I am not looking forward to the 4 or 5 hours of delivering the test on Zoom…
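For the record, the chevron issue is the classical OT1 font-encoding trap rather than anything specific to “\sf”: in LaTeX’s default text encoding, < and > in text mode come out as inverted punctuation marks. A minimal sketch of the usual workarounds:

```latex
\documentclass{article}
% \usepackage[T1]{fontenc} % workaround 1: T1 encoding gives proper text < and >
\begin{document}
{\sf x<1}            % in the default OT1 text mode this prints as inverted punctuation
{\sf $x<1$}          % workaround 2: typeset the comparison in math mode
{\sf x\textless{}1}  % workaround 3: the text-symbol commands \textless/\textgreater
\end{document}
```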

## simulating hazard

Posted in Books, Kids, pictures, Statistics, Travel with tags cross validated, debiasing, fixed point, grounded, hazard function, homework, Luc Devroye, Non-Uniform Random Variate Generation, pseudo-marginal MCMC, random variable, simulation, thinning, unbiased MCMC on May 26, 2020 by xi'an

**A** rather straightforward X validated question that however leads to an interesting simulation question: **when given the hazard function h(·), rather than the probability density f(·), how does one simulate this distribution?** Mathematically h(·) identifies the probability distribution as much as f(·), since h(t) = f(t)/(1−F(t)) and F(t) = 1 − exp{−∫₀ᵗ h(u) du},

which means cdf inversion could be implemented in principle. But in practice, assuming the integral is intractable, what would an exact solution look like? Including MCMC versions exploiting one fixed point representation or the other… Since f(t) = h(t) exp{−∫₀ᵗ h(u) du},

using an unbiased estimator of the exponential term in a pseudo-marginal algorithm would work. And getting an unbiased estimator of the exponential term can be done by Glynn & Rhee debiasing. But this is rather costly… Having Devroye’s book under my nose [at my home desk] should however have driven me earlier to the obvious solution of… simply opening it!!! A whole section (VI.2) is indeed dedicated to simulations when the distribution is given by the hazard rate. (Which made me realise this problem is related to PDMPs, in that thinning and composition tricks are common to both.) Besides the inversion method, i.e., X=H⁻¹(E) with E an Exp(1) variate, Devroye suggests thinning a Poisson process when h(·) is bounded by a manageable g(·). Or a generic dynamic thinning approach that converges when h(·) is non-increasing.
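The thinning idea can be sketched in a few lines of R, in the simplest setting where the hazard is dominated by a constant bound h(t) ≤ λ rather than a general g(·); the function and argument names are my own choices:

```r
# thinning sketch: simulate X with hazard rate h(.), assuming h(t) <= lam for all t
sim_hazard <- function(h, lam) {
  t <- 0
  repeat {
    t <- t + rexp(1, rate = lam)          # next event of a rate-lam Poisson process
    if (runif(1) < h(t) / lam) return(t)  # keep the event with probability h(t)/lam
  }
}
# sanity check: a constant hazard h(t)=1 corresponds to the Exponential(1) distribution
x <- replicate(1e4, sim_hazard(function(t) 1, 2))
```

The first retained point of the thinned Poisson process then has exactly the target hazard, so in the sanity check the empirical mean of x should be close to 1.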

## one or two?

Posted in Books, Kids, R with tags dynamic programming, FiveThirtyEight, mathematical puzzle, R, random walk, simulation, The Riddler on March 12, 2020 by xi'an

**A** superposition of two random walks from The Riddler:

Starting from zero, a random walk is produced by choosing moves between ±1 and ±2 at each step. If the choice between both is made towards maximising the probability of ending up positive after 100 steps, what is this probability?

Although the optimal path is not necessarily made of moves that optimise the probability of ending up positive after the remaining steps, I chose to follow a dynamic programming approach by picking between ±1 and ±2 at each step based on that probability:

```r
bs=matrix(0,405,101) #best strategy with value i-203 at time j-1
bs[204:405,101]=1
for (t in 100:1){
  tt=2*t
  bs[203+(-tt:tt),t]=.5*apply(cbind(
    bs[204+(-tt:tt),t+1]+bs[202+(-tt:tt),t+1],
    bs[201+(-tt:tt),t+1]+bs[205+(-tt:tt),t+1]),1,max)}
```

resulting in the probability

```r
> bs[203,1]
[1] 0.6403174
```

Just checking that a simple strategy of picking ±1 above zero and ±2 below leads to the same value

```r
N=1e6 # number of simulated walks
ga=rep(0,N)
for(v in 1:100) ga=ga+(1+(ga<1))*sample(c(-1,1),N,rep=TRUE)
```

or sort of

```r
> mean(ga>0)
[1] 0.6403494
```

With highly similar probabilities when switching at ga<2

```r
> mean(ga>0)
[1] 0.6403183
```

or at ga<0

```r
> mean(ga>0)
[1] 0.6403008
```

and too little difference to spot a significant improvement among the three boundaries.
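The three switching rules can also be compared in a single run; a sketch, where the sample size N and the seed are my own choices and the rule “move by ±2 below b” is parameterised by the boundary b:

```r
set.seed(1)
N=1e5 # number of simulated walks per rule
probs=sapply(c(0,1,2), function(b){ # b is the boundary: step +-2 when ga<b, +-1 otherwise
  ga=rep(0,N)
  for(v in 1:100) ga=ga+(1+(ga<b))*sample(c(-1,1),N,rep=TRUE)
  mean(ga>0)})
```

All three estimated probabilities should then sit within Monte Carlo error of the dynamic-programming value 0.6403.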