Archive for fixed point

artificial EM

Posted in Books, Kids, R, Statistics, University life on October 28, 2020 by xi'an

When addressing an X validated question on the use of the EM algorithm when estimating a Normal mean, my first comment was that it was inappropriate, since there is no missing-data structure to anchor it to. However, I then reflected upon the infinite number of ways to demarginalise the Normal density into a joint density

\int f(x,z;\mu)\,\text{d}z = \varphi(x-\mu)

from the slice-sampler representation, where f(x,z;μ) is an indicator function, to a joint Normal distribution with an arbitrary correlation. While the joint Normal representation produces a sequence converging to the MLE, the slice representation utterly fails, as the indicator function makes any starting value of μ a fixed point for EM.
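To make the contrast concrete, here is a minimal sketch of the joint Normal version (the function name em_normal, the default ρ=.5, and the synthetic data are mine): taking (x,z) jointly Normal with means (μ,μ), unit variances, and correlation ρ, the E-step replaces z with μ+ρ(x−μ) and the M-step returns the complete-data MLE of μ, the average of (x+z)/2.

em_normal = function(x, rho = .5, mu = 0, niter = 50) {
  # (x_i, z_i) jointly Normal with means (mu, mu), unit variances,
  # and correlation rho, so that the x-marginal is N(mu, 1)
  for (t in 1:niter) {
    ez = mu + rho * (x - mu)  # E-step: E[z_i | x_i, mu]
    mu = mean((x + ez) / 2)   # M-step: complete-data MLE of mu
  }
  mu
}

x = rnorm(100, mean = 3)
c(em_normal(x), mean(x))  # both close to the MLE, mean(x)

Since the update amounts to μ ← (1+ρ)x̄/2 + (1−ρ)μ/2, the sequence contracts geometrically to the MLE x̄ for any ρ∈(−1,1), whereas the slice version leaves μ unchanged from one iteration to the next.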

Incidentally, when quoting from Wikipedia on the purpose of the EM algorithm, the following passage

Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations.

struck me as confusing and possibly wrong, since it seems to suggest seeking a maximum in both the parameters and the latent variables, which does not produce the same value as maximising the observed likelihood.

simulating hazard

Posted in Books, Kids, pictures, Statistics, Travel on May 26, 2020 by xi'an

A rather straightforward X validated question that nonetheless leads to an interesting simulation question: when given the hazard function h(·) rather than the probability density f(·), how does one simulate from this distribution? Mathematically, h(·) identifies the probability distribution as much as f(·) does,

1-F(x)=\exp\left\{ -\int_{-\infty}^x h(t)\,\text{d}t \right\}=\exp\{-H(x)\}

which means cdf inversion could be implemented in principle. But in practice, assuming the integral is intractable, what would an exact solution look like, including MCMC versions exploiting one fixed-point representation or the other? Since

f(x)=h(x)\,\exp\left\{ -\int_{-\infty}^x h(t)\,\text{d}t \right\}

using an unbiased estimator of the exponential term in a pseudo-marginal algorithm would work. And an unbiased estimator of the exponential term can be obtained by Glynn & Rhee debiasing. But this is rather costly… Having Devroye's book under my nose [at my home desk] should however have driven me earlier to the obvious solution: simply open it! A whole section (VI.2) is indeed dedicated to simulations when the distribution is given by the hazard rate. (Which made me realise this problem is related to PDMPs, in that thinning and composition tricks are common to both.) Besides the inversion method, i.e., X=H⁻¹(−log U) with U Uniform(0,1), Devroye suggests thinning a Poisson process when h(·) is bounded by a manageable g(·), or a generic dynamic thinning approach that converges when h(·) is non-increasing.
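As a minimal sketch of the bounded case, assuming a constant bound λ on h(·) (the function name sim_hazard, the bound, and the test hazard are mine), one can run a homogeneous Poisson process of rate λ and accept each event with probability h(t)/λ, returning the first accepted point:

sim_hazard = function(h, lam) {
  # thinning a rate-lam Poisson process, assuming h(t) <= lam for all t
  t = 0
  repeat {
    t = t + rexp(1, rate = lam)          # next Poisson event
    if (runif(1) < h(t) / lam) return(t) # accept with probability h(t)/lam
  }
}

# check with h(t) = 1/(1+t), for which H(t) = log(1+t) and F(t) = t/(1+t)
x = replicate(1e4, sim_hazard(function(t) 1 / (1 + t), lam = 1))
mean(x <= 1)  # should be close to F(1) = 1/2

The same acceptance step with a time-varying envelope g(·) in place of the constant λ gives the general version.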

le Monde puzzle [#745]

Posted in R on October 21, 2011 by xi'an

The puzzle in Le Monde this weekend is not that clear (for a change!), so I may be confused in the following exposition:

Three card players are betting with a certain (and different) number of chips each, between 4 and 9. After each game, the loser doubles the number of chips of the winner (while the second-ranked player keeps her chips). The game stops when the loser cannot pay the winner. Find the initial configuration such that, at some point in the game, all players have the same number of chips.

So, if (x1,x2,x3) is the chip configuration at time t for the ordered players, (2x1,x2,x3-x1) is the chip configuration at time t+1. Rather than running an exhaustive search, feasible given the limited number of possibilities, I decided to search at random, ending up with the R function

lemonde = function(chip, chap) {
  # random search for a starting configuration of three distinct chip
  # counts in chip:chap from which the game can reach a three-way tie
  x = rep(-1, 3)
  while (min(x) < 0) {  # restart until a run ends in a tie
    # sample() without replacement guarantees distinct starting counts
    start = x = sample(chip:chap, 3)
    # play on while everyone can pay and the chips are not all equal
    while ((min(x) > -1) && (length(unique(x)) > 1)) {
      x = sample(x)                       # random winner in first position
      x = c(2 * x[1], x[2], x[3] - x[1])  # loser (third) doubles the winner
    }
  }
  list(start = sort(start), finish = x)
}

leading to

> lemonde(4,9)
$start
[1] 7 8 9

$finish
[1] 8 8 8

with a unique starting point. More interestingly, other configurations may have several starting points. Of course, a mathematical analysis of the problem would shed more light on the difference. Maybe next weekend's issue of Le Monde (i.e., tonight!) will be enough.

> lemonde(4,10)
$start
[1]  5  9 10

$finish
[1] 8 8 8

> lemonde(4,10)
$start
[1]  6  8 10

$finish
[1] 8 8 8

> lemonde(4,10)
$start
[1] 7 8 9

$finish
[1] 8 8 8
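For completeness, here is a sketch of the exhaustive alternative (my own addition, with hypothetical names reachable_tie and maxdepth): a bounded breadth-first search over the game tree that tries every winner-loser pair at each round and reports whether a given starting triple can reach a three-way tie.

reachable_tie = function(start, maxdepth = 20) {
  # breadth-first search over the game tree: at each round, loser l pays
  # winner w the winner's current count, provided she can afford it
  states = list(sort(start))
  for (d in 1:maxdepth) {
    nxt = list()
    for (s in states)
      for (w in 1:3) for (l in setdiff(1:3, w))
        if (s[l] >= s[w]) {      # the loser can pay the winner
          t = s
          t[l] = t[l] - t[w]     # loser pays the winner's (pre-doubling) stake
          t[w] = 2 * t[w]        # which doubles the winner's chips
          if (length(unique(t)) == 1) return(TRUE)  # three-way tie reached
          nxt[[length(nxt) + 1]] = sort(t)  # order irrelevant: all pairs tried
        }
    states = unique(nxt)         # deduplicate reachable configurations
    if (length(states) == 0) return(FALSE)  # every branch ended with an unpayable loser
  }
  FALSE  # no tie found within maxdepth rounds
}

# list the starting triples of distinct counts in 4:9 that can reach a tie
sets = combn(4:9, 3)
for (i in seq_len(ncol(sets)))
  if (reachable_tie(sets[, i])) print(sets[, i])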