## evolution of correlations [award paper]

“Many researchers might have observed that the magnitude of a correlation is pretty unstable in small samples.”

**O**n the statsblog aggregator, I spotted an entry that eventually led me to this post about the best paper award for the evolution of correlation, a paper published in the *Journal of Research in Personality*. A journal not particularly well-known for its input on statistical methodology. The main message of the paper is that, while the empirical correlation is highly variable for small n’s, an interval (or *corridor of stability*!) can be constructed so that a Z-transform of the correlation does not vary away from the true value by more than a chosen quantity like 0.1. And the *point of stability* is then defined as the sample size after which the trajectory of the estimate does not leave the corridor… Both corridor and point depending on the true and unknown value of the correlation parameter by the by. Which implies resorting to bootstrap to assess the distribution of this point of stability. And deduce quantiles that can be used for… For what exactly?! Setting the necessary sample size? But this requires a preliminary run to assess the possible value of the true correlation ρ. The paper concludes that “for typical research scenarios reasonable trade-offs between accuracy and confidence start to be achieved when n approaches 250”. This figure was achieved by a bootstrap study on a bivariate Gaussian population with 10⁶ datapoints, yes indeed 10⁶!, and bootstrap samples of maximal size 10³. All in all, while I am at a loss as to why the *Journal of Research in Personality* would promote the estimation of a correlation coefficient with 250 datapoints, there is nothing fundamentally wrong with the paper (!), except for this recommendation of 250 datapoints, as the figure stems from a specific setting with particular calibrations and cannot be expected to apply in each and every case.
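To make the corridor-and-point construction concrete, here is a quick R sketch of my own (a reconstruction, not the authors’ code, and working on the raw correlation scale rather than the Z-transform for simplicity): along one trajectory of running correlations, the point of stability is the last sample size at which the estimate leaves a ±w corridor around the true ρ, plus one.

```r
set.seed(1)
rho <- .25; w <- .1; N <- 1000
# one bivariate Gaussian trajectory with correlation rho
x <- rnorm(N)
y <- rho * x + sqrt(1 - rho^2) * rnorm(N)
# running correlation estimates r_n for n = 3..N
r <- sapply(3:N, function(n) cor(x[1:n], y[1:n]))
# point of stability: last n at which r_n lies outside [rho - w, rho + w], plus one
out <- which(abs(r - rho) > w)
pos <- if (length(out)) (3:N)[max(out)] + 1 else 3
pos   # N + 1 would mean the trajectory never settled within this run
```

Since both corridor and point depend on the unknown ρ, repeating this over bootstrap trajectories and taking quantiles of `pos` is exactly the device the paper resorts to.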

Actually, the graph in the paper was the thing that first attracted my attention, because it looks very much like the bootstrap examples I show my third-year students to demonstrate the appeal of the bootstrap. Which is not particularly useful in the current case. A quick simulation on 100 samples of size 300 showed [above] that Monte Carlo simulations produce a tighter confidence band than the one created by bootstrap, in the Gaussian case. Here is the R code:

```r
# 100 replicas of a bivariate Gaussian sample of size 300 with correlation .25
x = matrix(rnorm(100 * 300), ncol = 300)
y = matrix(rnorm(100 * 300), ncol = 300) * sqrt(1 - .25^2) + .25 * x
bore = core = matrix(0, ncol = 300, nrow = 100)
# running correlation along each Monte Carlo replica
for (t in 2:300)
  for (i in 1:100)
    core[i, t] = cor(x[i, 1:t], y[i, 1:t])
# one bootstrap resample per replica, with its own running correlation
for (i in 1:100) {
  bdx = sample(1:300, 300, rep = TRUE)
  for (t in 2:300)
    bore[i, t] = cor(x[i, bdx[1:t]], y[i, bdx[1:t]])
}
```
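To turn the spaghetti of trajectories into an actual band, pointwise quantiles across the replicas suffice; a self-contained sketch (same Gaussian setting as above, vectorised with `sapply` instead of the double loop):

```r
set.seed(42)
M <- 100; N <- 300; rho <- .25
x <- matrix(rnorm(M * N), ncol = N)
y <- rho * x + sqrt(1 - rho^2) * matrix(rnorm(M * N), ncol = N)
# running correlations, one row per replica, columns t = 2..N
core <- t(sapply(1:M, function(i)
  sapply(2:N, function(t) cor(x[i, 1:t], y[i, 1:t]))))
# pointwise 90% Monte Carlo band on the running correlation
band <- apply(core, 2, quantile, probs = c(.05, .95))
# band width at n = 300 versus n = 30: the band tightens as n grows
diff(band[, N - 1]); diff(band[, 29])
```

The same `apply` line run on the bootstrap matrix `bore` gives the (wider) bootstrap band of the figure.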

September 15, 2015 at 9:19 am

An excerpt from the abstract of another paper in this journal: “In Study 5, we tested the link between introversion and the mountains experimentally by sending participants to a flat, open area or a secluded, wooded area. The terrain did not make people more introverted, but introverts were happier in the secluded area than in the flat/open area, which is consistent with the person–environment fit hypothesis.”

September 15, 2015 at 3:00 am

Gonna plug my answer to a Cross Validated question about sample sizes for estimating correlation coefficients, just ’cause I’m proud of it. It has R code too.
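For the record, the textbook route such sample-size computations usually take (my own sketch, not the linked answer): the Fisher z-transform of r is approximately Gaussian with standard error 1/√(n−3), so the n needed for a given confidence-interval half-width on the z scale follows by inverting that formula.

```r
# sample size so that a CI on Fisher's z scale has half-width h
n_for_halfwidth <- function(h, conf = .95) {
  z <- qnorm(1 - (1 - conf) / 2)   # e.g. 1.96 for a 95% interval
  ceiling((z / h)^2 + 3)           # solves h = z / sqrt(n - 3)
}
n_for_halfwidth(.1)   # → 388
```

Which incidentally puts the paper’s blanket n ≈ 250 next to what a half-width of .1 actually demands.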

September 15, 2015 at 9:18 am

I will give you an award then!