Archive for heavy-tail distribution

a programming bug with weird consequences

Posted in Kids, pictures, R, Statistics, University life on November 25, 2015 by xi'an

One student of mine mistakenly coded an independent Metropolis-Hastings algorithm with a proposal variance far smaller than the target variance. Here is the R code of this implementation:

#target is N(0,1)
#proposal is N(0,.01^2), i.e., sd=.01
T=1e5
prop=x=rnorm(T,sd=.01)   #all proposals drawn at once
ratop=dnorm(prop,log=TRUE)-dnorm(prop,sd=.01,log=TRUE) #log target/proposal ratios
ratav=ratop[1]           #log ratio at the current state of the chain
logu=ratop-log(runif(T))
for (t in 2:T){
  if (logu[t]>ratav){    #accept when log(u[t])<ratop[t]-ratav
    x[t]=prop[t];ratav=ratop[t]}else{x[t]=x[t-1]}
  }

It produces outputs of the following shape

[trace of the simulated chain, with lengthy freezes]

which is quite amazing because of the small variance. The reason for the lengthy freezes of the chain is the occurrence, with positive probability, of realisations from the proposal with very small proposal density values, as they induce very small Metropolis-Hastings acceptance probabilities for the following proposals and are almost “impossible” to leave. This is due to the lack of control of the target, which is flat over the domain of the proposal for all practical purposes. Obviously, in such a setting, the outcome is unrelated to the N(0,1) target!
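
As a back-of-the-envelope check (mine, not part of the original post), suppose the chain sits at a draw four proposal standard deviations away from zero: a typical new proposal near zero then gets accepted with probability of order 10⁻⁴, which explains the freezes:

#back-of-the-envelope check (mine, not in the original post)
x0=4*.01 #a realisation four proposal sd's away from zero
ratox0=dnorm(x0,log=TRUE)-dnorm(x0,sd=.01,log=TRUE) #about 3.39
rato0=dnorm(0,log=TRUE)-dnorm(0,sd=.01,log=TRUE)    #about -4.61
exp(rato0-ratox0) #acceptance probability of a central proposal, about 3e-4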

It is also unrelated to the normal proposal, in that switching to a t proposal with 3 degrees of freedom produces a similar outcome.

It is only when using a Cauchy proposal that the pattern vanishes.
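
And for comparison, a minimal sketch (my own, mirroring the code above) of the Cauchy-proposal version: since the Cauchy tails dominate those of the normal target, the target-to-proposal density ratio is bounded and the sampler no longer freezes.

#same independent MH sampler, with a standard Cauchy proposal (my sketch)
T=1e5
prop=x=rcauchy(T)
ratop=dnorm(prop,log=TRUE)-dcauchy(prop,log=TRUE) #bounded log ratio
ratav=ratop[1]
logu=ratop-log(runif(T))
for (t in 2:T){
  if (logu[t]>ratav){
    x[t]=prop[t];ratav=ratop[t]}else{x[t]=x[t-1]}
  }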

trimming poor importance samplers with Pareto scissors

Posted in Books, Statistics, University life on November 12, 2015 by xi'an

A while ago, Aki Vehtari and Andrew Gelman arXived a paper on self-normalised importance sampling estimators, Pareto smoothed importance sampling, which I commented on almost immediately and then sat on, waiting for the next version. Since the two A's are still working on that revision, I eventually decided to post the comments, before a series of posts on the same issue. Disclaimer: the above quote from and picture of Pareto are unrelated to the paper!

A major drawback with importance samplers is that they can produce infinite variance estimators. Aki and Andrew compare in this study the behaviour of truncated importance weights, following a paper by Ionides (2008) that Andrew and I had proposed as a student project last year, a project that did not reach a conclusion. The truncation is of order √S, where S is the number of simulations, rescaled by the average weight (which had better be the median weight in the event of infinite variance weights). While this truncation leads to a finite variance, it also induces a possibly far from negligible bias, a bias that the paper suggests reducing via a Pareto modelling of the largest or extreme weights. Three possible conclusions come from the Pareto modelling and the estimation of the Pareto shape k. If k<½, there is no variance issue and truncation is not necessary; if ½<k<1, the estimator has a mean but no variance; and if k>1, it does not even have a mean. The latter case sounds counter-intuitive, since the self-normalised importance sampling estimator is the ratio of an estimate of a finite integral to an estimate of a positive constant… Aki and Andrew further use the Pareto estimation to smooth out the largest weights, replacing them by estimated quantiles. They also eliminate the largest weights when k comes close to 1 or higher values.
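
To make the truncation rule concrete, here is a minimal sketch (mine, not the paper's code) on a normal target with a deliberately poor normal proposal, whose importance weights have infinite variance:

#truncated self-normalised importance sampling (my sketch)
#target N(0,1), proposal N(0,.5^2): infinite variance weights
S=1e4
y=rnorm(S,sd=.5)
logw=dnorm(y,log=TRUE)-dnorm(y,sd=.5,log=TRUE)
w=exp(logw-max(logw))          #stabilised importance weights
wtrunc=pmin(w,sqrt(S)*mean(w)) #truncation at sqrt(S) times the average weight
sum(wtrunc*y^2)/sum(wtrunc)    #truncated SNIS estimate of E[X^2]=1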

On a normal toy example, simulated with too small a variance, the method is seen to reduce the variability if not the bias. In connection with my above remark, k never appears as significantly above 1 in this example. A second toy example uses a shifted t distribution as proposal. This setting should not induce an infinite variance problem, since the inverse of a t density remains integrable under a normal distribution, but the variance grows with the shift in the t proposal, and so does the Pareto index k, eventually exceeding the boundary value 1. Similar behaviour is observed on a multidimensional example.
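
For a crude numerical counterpart to the paper's estimated shape k (my own stand-in, not the paper's generalised Pareto fit), one can run a Hill-type estimator on the largest weights of the sketch above:

#Hill-type estimate of the weight tail shape (mine, not the paper's k-hat)
hill=function(w,M=ceiling(.05*length(w))){
  ws=sort(w,decreasing=TRUE)[1:M]
  mean(log(ws[-M]))-log(ws[M])
  }
hill(w) #theoretical shape for these weights is 3/4, in the 1/2<k<1 band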

The issue I have with this approach is the same one I put to Andrew last year, namely why would one want to use a poor importance sampler and run the risk of ending up with a worthless approximation? Detecting infinite variance estimation is obviously an essential first step to produce reliable approximations, but a second step would be to seek a substitute for the proposal in an automated manner, possibly by increasing the tails of the original one, or by running a reparameterisation of the original problem with the same proposal, towards thinner tails of the target. Automated sounds unrealistic, obviously, but so does trusting an infinite variance estimate. If worst comes to worst, we should acknowledge and signal that the current sampler cannot be trusted. As in statistical settings, we should be able to state that we cannot produce a satisfactory solution (and hence need more data or different models).

bias in estimating bracketed quantile contributions

Posted in Books, R, Statistics, University life on May 16, 2014 by xi'an

“Vilfredo Pareto noticed that 80% of the land in Italy belonged to 20% of the population, and vice-versa, thus both giving birth to the power law class of distributions and the popular saying 80/20.”

Yesterday, in “one of those” coincidences, I voluntarily dropped Nassim Taleb's The Bed of Procrustes in a suburban café as my latest contribution to the book-crossing (or bXing!) concept, and spotted a newly arXived paper by Taleb and Douady. The paper's full title is “On the Biases and Variability in the Estimation of Concentration Using Bracketed Quantile Contributions” and its central idea is that estimating

\kappa_\alpha = \alpha\mathbb{E}[X|X>q_\alpha]\big/\mathbb{E}[X]

(where q_α is the upper α-quantile of X, i.e., P(X>q_α)=α) by the ratio

\sum_{i=1}^n \mathbb{I}_{X_i>\hat{q}_\alpha} X_i \big/ \sum_{i=1}^n X_i

can be strongly biased. And that the fatter the tail (i.e., the lower the power β of the power law tail), the worse the bias. This is definitely correct, if not entirely surprising, given that the estimating ratio involves a ratio of estimators, plus an estimator of q_α, and that both numerator and denominator have infinite variances when the power β is less than 2. The paper contains a simulation experiment easily reproduced by the following R code

#biased estimator of kappa(.01)
alpha=.01 #tail probability
omalpha=1-alpha
T=10^4    #simulations
n=10^3    #sample size
beta=1.1  #Pareto tail index
moobeta=-1/beta

kap=rep(0,T)
for (t in 1:T){
  sampl=runif(n)^moobeta         #Pareto(beta) draws by inverse cdf
  quanta=quantile(sampl,omalpha) #empirical upper alpha-quantile
  kap[t]=sum(sampl[sampl>quanta])/sum(sampl) #bracketed quantile contribution
  }
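
For comparison (my own addition), the true value follows from E[X|X>q]=qβ/(β−1) and q_α=α^(−1/β) for a Pareto(β) on (1,∞), giving κ_α=α^(1−1/β):

#true value of kappa_alpha for a Pareto(beta) sample (my own check)
alpha^(1-1/beta) #about .66 for alpha=.01 and beta=1.1
mean(kap)        #the simulated average falls below, exhibiting the bias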

What is somewhat surprising, though, is that the paper deems it necessary to run T=10¹² simulations to assess the bias when this bias is already visible in the first digit of κ_α. Given that the simulation experiment goes as high as n=10⁸, this means the authors simulated 10²⁰ Pareto variables to exhibit a bias that a few thousand replications could have produced. Checking the numerators and denominators in the above collection of ratios also shows that they may take unbelievably large values.

“…some theories are built based on claims of such `increase’ in inequality, as in Piketti (2014), without taking into account the true nature of κ, and promulgating theories about the `variation’ of inequality without reference to the stochasticity of the estimation—and the lack of consistency of κ across time and sub-units.”

The more relevant questions about this issue of estimating κ_α are, in my opinion, (a) why this quantity is of enough practical importance to consider its estimation and to seek estimators that would remain robust as the power β varies arbitrarily close to 1; (b) in which sense there is anything more to the phenomenon than the difficulty in estimating β itself; and (c) what the efficient asymptotic variance for estimating κ_α is (since there is no particular reason to only consider the most natural estimator). Despite the above quote, that the paper constitutes a major refutation of Piketty's Capital in the Twenty-First Century is rather unlikely!