Archive for directional data

wrapped Normal distribution

Posted in Books, R, Statistics on April 14, 2020 by xi'an

One version of the wrapped Normal distribution on (0,1) is expressed as a sum of Normal densities with means shifted by all integers

\psi(x;\mu,\sigma)=\sum_{k\in\mathbb Z}\varphi(x;\mu+k,\sigma)\mathbb I_{(0,1)}(x)

which, while a parameterised density, has imho no particular statistical appeal over other series. It was nonetheless the centre of a series of questions on X validated over the past weeks, where it was curiously used as the basis of a random-walk-type move over the unit cube, along with a uniform component. Simulating from this distribution is easily done once it is seen as an infinite mixture of truncated Normal distributions, since the weights are easily computed

\sum_{k\in\mathbb Z}\overbrace{[\Phi_\sigma(1-\mu-k)-\Phi_\sigma(-\mu-k)]}^{p_k(\mu,\sigma)}\times\dfrac{\varphi_\sigma(x-\mu-k)\mathbb I_{(0,1)}(x)}{\Phi_\sigma(1-\mu-k)-\Phi_\sigma(-\mu-k)}

Hence, simulations can be coded as

wrap <- function(x, mu, sig){
  # wrapped Normal density at x, truncating the sum over k at |k| <= ter
  ter = trunc(5*sig + 1)
  return(sum(dnorm(x + (-ter):ter, mu, sig)))}

siw = function(N=1e4, beta=.5, mu, sig){
  # N draws from a beta/(1-beta) mixture of a Uniform(0,1) and the wrapped Normal,
  # the latter seen as a finite mixture of truncated Normals
  unz = (runif(N) < beta)                                 # allocations to the uniform component
  ter = trunc(5*sig + 1)
  qrbz = diff(prbz <- pnorm(-mu + (-ter):ter, sd=sig))    # weights of the truncated Normal components
  ndx = sample((-ter+1):ter, N, rep=TRUE, pr=qrbz) + ter  # component indices
  z = sig*qnorm(prbz[ndx] + runif(N)*qrbz[ndx]) - ndx + mu + ter + 1  # inversion within the component, shifted back to (0,1)
  return(c(runif(sum(unz)), z[!unz]))}
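
As a quick sanity check (a sketch of my own, with arbitrary parameter values rather than those from the X validated question), the histogram of siw draws with β=0, i.e. with the uniform component switched off, can be compared with the density returned by wrap (which is not vectorised in its first argument, hence the sapply):

x = siw(N=1e5, beta=0, mu=.3, sig=.7)    # wrapped Normal draws only
hist(x, prob=TRUE, breaks=50, col="wheat")
grid = seq(0, 1, length.out=201)
lines(grid, sapply(grid, wrap, mu=.3, sig=.7), col="sienna", lwd=2)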

Checking that the harmonic mean estimator was functioning for this density was straightforward, predictably so since the density is bounded away from zero on (0,1). The prolix originator of the question was also wondering about the mean of the wrapped Normal distribution, which I derived as (predictably)

\mu+\sum_{k\in\mathbb Z} k\,p_k(\mu,\sigma)

but could not simplify any further except for μ=0,½,1, when it is ½. A simulated evaluation of the mean as a function of μ shows a vaguely sinusoidal pattern, also predictably periodic and unsurprisingly antisymmetric, and apparently independent of the scale parameter σ…
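
For the record, here is a sketch of such an evaluation (my own choices of σ, grid, and sample size, not those behind the figure alluded to above), comparing the theoretical mean μ+Σ_k k p_k(μ,σ) with empirical means of siw draws:

wmean = function(mu, sig){
  # theoretical mean mu + sum_k k p_k(mu,sig), truncating the sum as in wrap()
  ter = trunc(5*sig + 1)
  k = (-ter):ter
  pk = pnorm(1-mu-k, sd=sig) - pnorm(-mu-k, sd=sig)
  mu + sum(k*pk)}
mus = seq(-1, 1, by=.05)
theo = sapply(mus, wmean, sig=.5)
simu = sapply(mus, function(m) mean(siw(N=1e4, beta=0, mu=m, sig=.5)))
plot(mus, theo, type="l", xlab="mu", ylab="mean of the wrapped Normal")
points(mus, simu, pch=20, cex=.5)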

ASC 2012 (#1)

Posted in Statistics, Travel, University life on July 11, 2012 by xi'an

This morning I attended Alan Gelfand's talk on directional data, i.e. data on the torus (0,2π), and found his modeling via wrapped normals (i.e. normals reprojected onto the unit sphere) quite interesting, raising lots of probabilistic questions. For instance, usual moments like mean and variance have no meaning in this space. Both the variance matrix of the underlying normal and its mean obviously matter. One thing I am wondering about is how restrictive the normal assumption is. Because of the projection, any random change to the scale of the normal vector does not impact this wrapped normal distribution, but there are certainly features that are not covered by this family. For instance, I suspect that it can offer at most two modes over the range (0,2π) and that it cannot be explosive at any point.

The keynote lecture this afternoon was delivered by Roderick Little in a highly entertaining way, about calibrated Bayesian inference in official statistics. For instance, he mentioned the inferential “schizophrenia” in this field due to the divide between design-based and model-based inferences. Although he did not define what he meant by “calibrated Bayesian” in the most explicit manner, he had this nice list of good reasons to be Bayesian (that came close to my own list at the end of the Bayesian Choice):

  1. conceptual simplicity (Bayes is prescriptive, frequentism is not), “having a model is an advantage!”
  2. avoiding ancillarity angst (Bayes conditions on everything)
  3. avoiding confidence cons (confidence is not probability)
  4. nails nuisance parameters (frequentists are either wrong or have a really hard time)
  5. escapes from asymptotia
  6. incorporates prior information, and if there is none, weak priors work fine
  7. Bayes is useful (25 of the top 30 cited are statisticians out of which … are Bayesians)
  8. Bayesians go to Valencia! [joke! Actually it should have been Bayesians go MCMskiing!]
  9. Calibrated Bayes gets better frequentist answers

He however insisted that frequentists should be Bayesians and also that Bayesians should be frequentists, hence the calibration qualification.

After an interesting session on Bayesian statistics, with (adaptive or not) mixtures and variational Bayes tools, I actually joined the “young statistician dinner” (without any pretense at being a young statistician, obviously) and had interesting exchanges on a whole variety of topics, esp. as Kerrie Mengersen adopted (reinvented) my dinner table switch strategy (w/o my R simulated annealing code). Until jetlag caught up with me.