Here are the slides for my upcoming discussion of Ron Gallant’s paper, tomorrow.

## Archive for Arnold Zellner

## reflections on the probability space induced by moment conditions with implications for Bayesian Inference [discussion]

Posted in Books, Statistics, University life with tags Arnold Zellner, empirical likelihood, fiducial distribution, measure theory, method of moments, R.A. Fisher, structural model on December 1, 2014 by xi'an

*[Following my earlier reflections on Ron Gallant’s paper, here is a more condensed set of questions towards my discussion next Friday.]*

“If one specifies a set of moment functions collected together into a vector m(x,θ) of dimension M, regards θ as random and asserts that some transformation Z(x,θ) has distribution ψ then what is required to use this information and then possibly a prior to make valid inference?” (p.4)

**T**he central question in the paper is whether or not given a set of moment equations

(where both the X_{i}‘s and θ are random), one can derive a likelihood function and a prior distribution compatible with those. It sounds to me like a highly complex question since it implies the integral equation

must have a solution for all n. A related question, one that also lingered around fiducial distributions, is how on Earth (or Middle-Earth) the notion of a random θ could arise outside Bayesian analysis. And another is how the equations could make sense without the existence of the pair *(prior, likelihood)*. Questions that may exhibit my ignorance of structural models, but that may also relate to the inconsistency of Zellner’s (1996) Bayesian method of moments, as exposed by Geisser and Seidenfeld (1999).
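To fix notation, here is my reconstruction of the missing displays, based on the quote above (p.4) and the standard method-of-moments setup; the exact form in the paper may differ:

```latex
% moment functions collected into a vector m(x,\theta), averaged over the sample
\bar m_n(x,\theta) = \frac{1}{n}\sum_{i=1}^{n} m(x_i,\theta),
\qquad
Z(x,\theta) = \sqrt{n}\,\bar m_n(x,\theta) \sim \psi .
% asserting this distribution for Z under the joint law of (x,\theta)
% amounts to the integral equation
\iint \mathbf{1}\{Z(x,\theta)\le z\}\;p(x\mid\theta)\,\pi(\theta)\,\mathrm{d}x\,\mathrm{d}\theta
= \Psi(z) \quad\text{for all } z,
```

which indeed has to hold simultaneously for every sample size n.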

For instance, the paper starts (why?) with the Fisherian example of the t distribution of

which truly is a t variable when θ is fixed at the true mean value. Now, if we assume that the joint distribution of the X_{i}‘s and θ is such that this projection is a t variable, is there any case other than the Dirac mass on θ? For all (large enough) sample sizes n? I cannot tell and the paper does not bring [me] an answer either.
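(For concreteness, the Fisherian pivot I take the missing display to be is the classical one,

```latex
Z(x,\theta) = \frac{\sqrt{n}\,(\bar x_n - \theta)}{s_n},
\qquad
\bar x_n = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
s_n^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar x_n)^2,
```

which follows a t distribution with n−1 degrees of freedom when the x_i are i.i.d. normal with mean θ.)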

When I look at the analysis made in the abstraction part of the paper, I am puzzled by the starting point (17), where

since the lhs and rhs operate on different spaces. In Fisher’s example, **x** is an n-dimensional vector, while Z is unidimensional. If I blindly apply the formula to this example, the t density does not integrate against the Lebesgue measure on the n-dimensional Euclidean space… If a change of measure allows for this representation, I do not see much appeal in using this new measure and anyway wonder in which sense this defines a likelihood function, i.e., the product of n densities of the X_{i}‘s conditional on θ. To me this is the central issue, which remains unsolved by the paper.

## Regularisation

Posted in Statistics, University life with tags AIC, Arnold Zellner, Bayesian model choice, BIC, g-prior, hyper-g-prior, Lasso, model credibility, variable selection on October 5, 2010 by xi'an

After a huge delay, since the project started in 2006 and was first presented in Banff in 2007 (as well as included in *Bayesian Core*), Gilles Celeux, Mohammed El Anbari, Jean-Michel Marin, and myself have eventually completed our paper on using hyper-g priors for variable selection and regularisation in linear models. The writing of this paper was mostly delayed by the publication of the 2007 JASA paper by Feng Liang, Rui Paulo, German Molina, Jim Berger, and Merlise Clyde,

*Mixtures of g-priors for Bayesian variable selection*. We had indeed (independently) obtained very similar derivations based on hypergeometric function representations but, once the above paper was published, we needed to add material to our derivation and chose to run a comparison study between Bayesian and non-Bayesian methods on a series of simulated and real examples. It took Mohammed El Anbari a while to complete this simulation study, and even longer for the four of us to convene and agree on the presentation of the paper. The only difference between Liang et al.’s (2007) modelling and ours is that we do not distinguish between the intercept and the other regression coefficients in the linear model. On the one hand, this gives us one degree of freedom that allows us to pick an improper prior on the variance parameter. On the other hand, our posterior distribution is not invariant under location transforms, which was a point we heavily debated in Banff… The simulation part shows that all “standard” Bayesian solutions lead to very similar decisions and that they are much more parsimonious than regularisation techniques.
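To illustrate the kind of Bayesian solution involved, here is a minimal sketch (not our paper’s actual study: I assume a toy Gaussian design, the unit-information choice g = n, an improper prior on the variance, and no special treatment of the intercept, as in our modelling) of closed-form g-prior comparison over all subsets of five predictors:

```python
import itertools
import numpy as np

def log_marginal(y, X, g):
    """Log marginal likelihood of y under Zellner's g-prior on the
    coefficients and an improper prior on the variance, up to a constant
    shared by all models."""
    n, k = X.shape
    if k == 0:  # null model
        return -0.5 * n * np.log(y @ y)
    fit = y @ X @ np.linalg.solve(X.T @ X, X.T @ y)  # y' P_X y
    return -0.5 * k * np.log(1 + g) - 0.5 * n * np.log(y @ y - g / (1 + g) * fit)

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta = np.array([2.0, -3.0, 0.0, 0.0, 0.0])  # only the first two predictors matter
y = X @ beta + rng.standard_normal(n)

g = float(n)  # unit-information choice
scores = {S: log_marginal(y, X[:, S], g)
          for r in range(p + 1) for S in itertools.combinations(range(p), r)}
best = max(scores, key=scores.get)
print("selected subset:", best)
```

The −(k/2) log(1+g) term penalises every extra coefficient, which is in essence why these solutions come out more parsimonious than regularisation paths.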

Two other papers posted on arXiv today address the model choice issue. The first one, by Bruce Lindsay and Jiawei Liu, introduces a credibility index, and the second one, by Bazerque, Mateos, and Giannakis, considers group-lasso on splines for spectrum cartography.

## Hyper-g priors

Posted in Books, R, Statistics with tags Arnold Zellner, Bayesian Core, Bayesian model choice, g-prior, GLMs, ILA, MCMC on August 31, 2010 by xi'an

Earlier this month, Daniel Sabanés Bové and Leo Held posted a paper about *g*-priors on arXiv. While I glanced at it for a few minutes, I did not have the chance to get a proper look at it till last Sunday. The *g*-prior was first introduced by the *late* Arnold Zellner for (standard) linear models, but it can be extended to generalised linear models (formalised by the *late* John Nelder) at little cost. In *Bayesian Core*, Jean-Michel Marin and I do centre the prior modelling in both linear and generalised linear models around

*g*-priors, using the naïve extension for generalised linear models,

as in the linear case. Indeed, the reasonable alternative would be to include the true information matrix but, since it depends on the parameter outside the normal case, this is not truly an alternative. Bové and Held propose a slightly different version
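Since the displayed formulas did not survive, here is my reconstruction from the standard definitions (the linear-model g-prior, the naïve extension we use in *Bayesian Core*, and what I read Bové and Held’s weighted variant to be; the exact scaling in their paper may differ):

```latex
\beta \mid \sigma^2 \sim \mathcal{N}\!\left(0,\; g\,\sigma^2\,(X^{\mathsf{T}}X)^{-1}\right)
\quad \text{(linear model)},
\qquad
\beta \sim \mathcal{N}\!\left(0,\; g\,(X^{\mathsf{T}}X)^{-1}\right)
\quad \text{(na\"ive GLM extension)},
\qquad
\beta \sim \mathcal{N}\!\left(0,\; g\,c\,(X^{\mathsf{T}}WX)^{-1}\right)
\quad \text{(Bov\'e and Held)},
```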

where **W** is a diagonal weight matrix and *c* is a family-dependent scale factor evaluated at the mode. As in Liang et al. (2008, JASA) and most of the current literature, they also separate the intercept from the other regression coefficients. They also burn their “improperness joker” by choosing a flat prior on the intercept β₀, which means they need to use a proper prior on *g*, again as in Liang et al. (2008, JASA), for the corresponding Bayesian model comparison to be valid. In *Bayesian Core*, we do not separate β₀ from the other regression coefficients and hence are left with one degree of freedom that we spend in choosing an improper prior on *g* instead. (Hence I do not get the remark of Bové and Held that our choice “prohibits Bayes factor comparisons with the null model”. As argued in *Bayesian Core*, the factor *g* being a hyperparameter shared by all models, we can use the same improper prior on *g* in all models and hence use standard Bayes factors.) In order to achieve closed-form expressions, the authors use Cui and George’s (2008) prior, which requires the two hyper-hyper-parameters *a* and *b* to be specified.

The second part of the paper considers computational issues. It compares the INLA solution of Rue, Martino and Chopin (2009, Series B) with an MCMC solution based on an independent proposal on g resulting from linear interpolations (?). The marginal likelihoods are approximated by Chib and Jeliazkov’s (2001, JASA) method for the MCMC part. Unsurprisingly, INLA does much better, even with a 97% acceptance rate in the MCMC algorithm.
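A minimal sketch of such an independence sampler, where every detail is my assumption: I target the posterior of the shrinkage u = g/(1+g) in a toy *linear* model under Liang et al.’s hyper-g prior with a = 3 (the authors’ GLM posterior is not available in closed form), and I build the proposal from a grid evaluation of the target, a crude stand-in for their linear interpolations:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 3
X = rng.standard_normal((n, k))
y = X @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(n)
fit = y @ X @ np.linalg.solve(X.T @ X, X.T @ y)  # y' P_X y
yy = y @ y
a = 3.0  # hyper-g hyperparameter, p(g) proportional to (1+g)^(-a/2)

def log_post(u):
    """Log posterior of u = g/(1+g), up to a constant: hyper-g prior,
    g-prior marginal likelihood, and the Jacobian of g -> u."""
    return (0.5 * (a + k) - 2.0) * np.log(1.0 - u) - 0.5 * n * np.log(yy - u * fit)

# independence proposal: piecewise-constant approximation of the target on a grid
edges = np.linspace(0.0, 1.0, 513)
mids = 0.5 * (edges[:-1] + edges[1:])
w = np.exp(log_post(mids) - log_post(mids).max())
w /= w.sum()
log_width = np.log(edges[1] - edges[0])

def propose():
    j = rng.choice(mids.size, p=w)
    return rng.uniform(edges[j], edges[j + 1]), np.log(w[j]) - log_width

u, lq = propose()
lp = log_post(u)
samples, accepted = [], 0
for _ in range(2000):
    u_new, lq_new = propose()
    lp_new = log_post(u_new)
    # Metropolis-Hastings ratio for an independence proposal
    if np.log(rng.random()) < (lp_new - lq_new) - (lp - lq):
        u, lp, lq = u_new, lp_new, lq_new
        accepted += 1
    samples.append(u)
acc_rate = accepted / 2000
g_samples = np.array(samples) / (1.0 - np.array(samples))
print(f"acceptance rate: {acc_rate:.2f}")
```

With a fine enough grid the proposal almost matches the target, which is consistent with the very high acceptance rate reported in the paper.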

The paper is very well-written and quite informative about the existing literature. It also uses the Pima Indian dataset. (The authors even dug out a 1991 paper of mine I had completely forgotten!) I am actually thinking of using the review in our revision of *Bayesian Core*, even though I think we should stick to our choice of including within the set of parameters…

## Death sequence

Posted in Books, Statistics, University life with tags Andrew Gelman, Arnold Zellner, generalised linear models, GLIM, ISBA, John Nelder, Julian Besag, Peter McCullagh, Sid Chib, Valencia conferences on August 22, 2010 by xi'an

August is not looking kindly on statisticians, as I have now learned (after ten days of disconnection) of both Arnold Zellner and John Nelder passing away, on Aug. 11 and 15, respectively. Following so closely on the death of Julian Besag, this is a sad series of departures of leading figures in the fields of statistics and econometrics. Arnold was 83 and, although I had met him at several Valencia meetings (including one in Alicante where we sat together for breakfast with Persi Diaconis, and where an irate [and well-known] statistician came to Arnold demanding apologies for comments made late the night before!), I only had true interactions with him over the past few years, through the Jeffreys reassessment I conducted with Judith Rousseau and Nicolas Chopin. On this occasion, Arnold was very kindly helpful, pointing out the volume he had edited on Jeffreys that I had overlooked, discussing more philosophical points about the early part of **Theory of Probability**, and giving a very nice overview of it at the O’Bayes 09 meeting. Always in the kindest manner. Sid Chib wrote an obituary of Arnold Zellner on the ISBA website (Arnold was the first ISBA president). Andrew Gelman also wrote some personal recollections about Arnold. A memorial site has been set up in his honour.

John Nelder was regularly attending the Read Paper sessions at the RSS, and those are the only times I met him. He was an impressive figure in many ways, first and foremost for his monumental **Generalised Linear Models** with Peter McCullagh, a (difficult and uncompromising) book that I strongly recommend to (i.e., force upon!) my PhD students for its depth. I also remember being quite intimidated the first time I talked with him, failing to understand his arguments so completely that I dreaded later discussions… John Nelder was at Fisher’s Rothamsted Experimental Station for most of his career and was certainly one of the last genuine Fisherians (despite a fairly rude letter from Fisher to him!).