mostly nuisance, little interest

[tree next to my bike parking garage at INSEE, Malakoff, Feb. 02, 2012]

Sorry for the misleading if catchy (?) title: I mean mostly nuisance parameters, very few parameters of interest! This morning I attended a talk by Eric Lesage from CREST-ENSAI on non-responses in surveys and their modelling through instrumental variables. The weighting formula used to compensate for the missing values was exactly the one at the core of the Robins-Wasserman paradox, discussed a few weeks ago by Jamie in Varanasi, namely the inverse-probability-weighted average with the estimated probability of response in the denominator,

$$\hat\psi = \frac{1}{n}\sum_{i=1}^n \frac{r_i\,y_i}{\hat\pi(x_i)},$$

where r_i indicates whether unit i responded and π̂(x_i) is its estimated response probability. The solution adopted in the talk was obviously different, with linear estimators used at most steps to evaluate the bias of the procedure (since researchers in survey sampling seem particularly obsessed with bias!).
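For the record, here is a minimal Python sketch of this weighting estimator (my own illustration, assuming a logistic response model, a zero target mean, and a plain sklearn logistic fit for π̂, none of which come from the talk): the naive respondent average is biased, while reweighting the respondents by their estimated response probabilities recovers the target.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 100_000

    # illustrative toy model (not from the talk): covariate, outcome, and
    # response indicator, missing at random given x
    x = rng.normal(size=n)
    y = x + rng.normal(size=n)            # target: E[y] = 0
    pi = 1 / (1 + np.exp(-(0.5 + x)))     # true response probability
    r = rng.binomial(1, pi)               # r = 1 if y is observed

    # naive average over respondents is biased upward
    # (respondents tend to have larger x, hence larger y)
    naive = y[r == 1].mean()

    # inverse-probability weighting, with the *estimated* response
    # probability in the denominator as in the Robins-Wasserman setting
    pi_hat = LogisticRegression().fit(x[:, None], r).predict_proba(x[:, None])[:, 1]
    ipw = np.mean(r * y / pi_hat)

    print(f"naive: {naive:.3f}   IPW: {ipw:.3f}   truth: 0")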

On a somehow related topic, Aris Spanos arXived a short note (that I read yesterday) about the Neyman-Scott paradox. The problem is similar to the Robins-Wasserman paradox in that there is an infinity of nuisance parameters (the means of the successive pairs of observations) and that a convergent estimator of the parameter of interest, namely the variance common to all observations, is available. While there exist Bayesian solutions to this problem (see, e.g., this paper by Brunero Liseo), they require some preliminary steps to bypass the difficulty of this infinite number of parameters and, in this respect, involve some ad-hockery, because the prior is then designed purposefully so. In other words, missing the direct solution based on the differences of the pairs is a wee frustrating, even though this statistic is not sufficient! The above paper by Brunero also covers my favourite example in this area: when considering a normal mean in large dimension, if the parameter of interest is the squared norm of this mean, the MLE ||x||² (and the Bayes estimator associated with Jeffreys’ prior) is (are) very poor: the bias is constant and of the order of the dimension of the mean, p. On the other hand, if one starts from ||x||² as the observation (definitely in-sufficient!), the resulting MLE (and the Bayes estimator associated with Jeffreys’ prior) has (have) much nicer properties. (I mentioned this example in my review of Chang’s book as it is paradoxical, gaining in efficiency by throwing away “information”! Of course, the part we throw away does not contain true information about the norm, but the likelihood does not factorise and hence the Bayesian answers differ…)
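Both phenomena are easy to check by simulation; here is a minimal Python sketch (illustrative settings only): the Neyman-Scott MLE of the common variance drifts to half its true value, the difference-based estimator converges to the true value, and the plug-in MLE ||x||² of the squared norm overshoots by the dimension p.

    import numpy as np

    rng = np.random.default_rng(1)

    # Neyman-Scott: n pairs x[i] ~ N(mu_i, sigma2), one nuisance mean per pair
    # (toy values, for illustration)
    n, sigma2 = 50_000, 4.0
    mu = 10 * rng.normal(size=n)
    x = mu[:, None] + np.sqrt(sigma2) * rng.normal(size=(n, 2))

    # joint MLE of sigma2 (profiling out the mu_i) is inconsistent:
    # it converges to sigma2 / 2
    xbar = x.mean(axis=1, keepdims=True)
    mle = np.sum((x - xbar) ** 2) / (2 * n)

    # the difference x[i,0] - x[i,1] ~ N(0, 2 sigma2) bypasses the nuisance
    # means and yields a convergent estimator
    direct = np.mean((x[:, 0] - x[:, 1]) ** 2) / 2

    print(f"MLE: {mle:.2f} (-> sigma2/2)   differences: {direct:.2f} (-> sigma2)")

    # squared-norm example: z ~ N(theta, I_p), so E||z||^2 = ||theta||^2 + p
    # and the plug-in MLE ||z||^2 overshoots by the dimension p
    p = 10_000
    theta = rng.normal(size=p)
    z = theta + rng.normal(size=p)
    print(f"bias of ||z||^2: {np.sum(z**2) - np.sum(theta**2):.0f} (p = {p})")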

I showed the paper to Andrew Gelman and here are his comments:

Spanos writes, “The answer is surprisingly straightforward.” I would change that to, “The answer is unsurprisingly straightforward.” He should’ve just asked me the answer first rather than wasting his time writing a paper!

The way it works is as follows. In Bayesian inference, all unknowns are treated alike: they have a joint prior and a joint posterior distribution. In frequentist inference, each unknown quantity is either a parameter or a predictive quantity. Parameters do not have probability distributions (hence the discomfort that frequentists have with notation such as N(y|m,s); they prefer something like N(y;m,s) or f_N(y;m,s)), while predictions do have probability distributions. In frequentist statistics, you estimate parameters and you predict predictive quantities. In this world, estimation and prediction are different. Estimates are evaluated conditional on the parameter. Predictions are evaluated conditional on model parameters but unconditional on the predictive quantities. Hence, mle can work well in many high-dimensional problems, as long as you consider many of the uncertain quantities as predictive. (But mle is still not perfect because of the problem of boundary estimates, e.g., here.)
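To illustrate Andrew’s estimation-versus-prediction point on the Neyman-Scott problem, here is a minimal simulation sketch (my own, with illustrative settings): treating the pair means as predictive quantities and integrating them out under a flat prior leaves only the within-pair differences, and maximising the resulting marginal likelihood in σ² is consistent.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    n, sigma2 = 20_000, 4.0
    mu = 10 * rng.normal(size=n)
    x = mu[:, None] + np.sqrt(sigma2) * rng.normal(size=(n, 2))

    # treating the mu_i as predictive and integrating them out (flat prior)
    # leaves only the within-pair differences d_i ~ N(0, 2 sigma2)
    d2 = (x[:, 0] - x[:, 1]) ** 2

    def neg_log_marginal(s2):
        # negative log-likelihood of the differences, as a function of sigma2
        return 0.5 * np.sum(np.log(2 * np.pi * 2 * s2) + d2 / (2 * s2))

    fit = minimize_scalar(neg_log_marginal, bounds=(0.1, 100.0), method="bounded")
    print(f"marginal MLE of sigma2: {fit.x:.2f} (truth: {sigma2})")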

6 Responses to “mostly nuisance, little interest”

  1. Could you clarify the difference between N(y|m,s) and N(y;m,s)? Thanks!

    • It is a matter of notation: N(y|m,s) is a conditional notation, meaning that {m,s} can be a random variable, while N(y;m,s) is an indexing notation, meaning that the normal distribution is indexed by the parameter {m,s}. In the end, it is the very same normal distribution with mean m and standard deviation s.
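      To make the distinction concrete, the conditional notation is the one under which marginalising over {m,s} makes sense, as in

      $$m(y) = \int N(y\mid m,s)\,\pi(m,s)\,\mathrm{d}m\,\mathrm{d}s,$$

      whereas N(y;m,s) merely labels one member of the parametric family {f_N(y;m,s) : (m,s) ∈ Θ}.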

      • Nicole Jinn Says:

        The resulting normal distribution might be the same; but the interpretations are vastly different, depending on the notation used: the fact that {m, s} is treated as a random variable is a point that seems to have created much controversy! Specifically, treating {m, s} as a random variable then supposedly places it in the sample space instead of the parameter space, which is a distinction Aris Spanos has often emphasized. (In fact, I am currently taking a course with Spanos about the philosophical foundations of econometrics.)

      • No, (m,s) does not belong to the sample space even when it is a random variable since it cannot be “sampled”.

  2. Thanks for posting this. I’m intrigued by Spanos’ comment that “the ML method was never meant to be applied to statistical models whose number of unknown parameters increases with the sample size”; Neyman’s whole point with this example was to show that Fisher was wrong about how generally MLEs could be used: MLEs have regularity conditions, just like anything else.

    For another nice example of regularity conditions mattering for MLEs, see

    http://radfordneal.wordpress.com/2008/08/09/inconsistent-maximum-likelihood-estimation-an-ordinary-example/

    … simple Bayes methods should do better than MLEs, here.

  3. Clara Grazian has studied an ABC analysis to get an integrated likelihood in Neyman-Scott types of problems. We will send the working paper shortly.
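    • In the meantime, here is a generic ABC rejection sketch on this problem (a rough illustration of the idea, not Clara’s actual method, with toy settings), using the mean squared within-pair difference as summary statistic, since it is free of the nuisance means:

      import numpy as np

      rng = np.random.default_rng(3)

      # illustrative Neyman-Scott data: n pairs with nuisance means
      n, sigma2 = 500, 4.0
      x = 10 * rng.normal(size=(n, 1)) + np.sqrt(sigma2) * rng.normal(size=(n, 2))
      s_obs = np.mean((x[:, 0] - x[:, 1]) ** 2)  # summary, free of the means

      # ABC rejection: draw sigma2 from a uniform prior, simulate the summary
      # (distributed as 2 sigma2 chi2_n / n), keep draws whose simulated
      # summary falls close to the observed one
      draws = rng.uniform(0.1, 20.0, size=200_000)
      s_sim = 2 * draws * rng.chisquare(df=n, size=draws.size) / n
      keep = np.abs(s_sim - s_obs) < 0.05 * s_obs
      print(f"ABC posterior mean of sigma2: {draws[keep].mean():.2f} (truth: {sigma2})")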
