Thanks! To be fair, I was not particularly worried about the differentiability! Non-everywhere-differentiable likelihoods, however, contain a sort of information that may fail to be reflected by Fisher's information: the clear-cut gap in the density at the mode.

However, I more seriously object to the point about repeated observations, as having identical observations does not agree with a (Lebesgue) absolutely continuous model. This was also the core of my comments on the Valencia 6 paper. If repeated values are a possible occurrence, the model should reflect this possibility.

]]>You expressed a concern about the lack of differentiability of the density at the mode. In this context, all we need to verify is that the first derivative of the density, for a regular underlying symmetric f, does exist. It is the second derivative that does not exist at the mode (see “On parameter orthogonality in symmetric and skew models”, Jones and Anaya-Izquierdo, 2011). For this reason, we have used the basic definition of the Fisher information matrix (FIM), which only involves first derivatives. Moreover, the existence of the FIM usually requires differentiability almost everywhere.
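To illustrate the point that only first derivatives are needed, here is a minimal numerical sketch of the basic definition of the FIM, I(λ) = E[(∂/∂λ log f(X; λ))²], for the standard skew-normal density 2φ(x)Φ(λx) (my choice of example, not necessarily the model of the paper); the score is approximated by a central finite difference, so no second derivative is ever computed:

```python
import numpy as np
from scipy.stats import norm

def log_density(x, lam):
    # skew-normal log-density: log 2 + log phi(x) + log Phi(lam * x)
    return np.log(2) + norm.logpdf(x) + norm.logcdf(lam * x)

def fisher_info_lambda(lam, n=200_000, eps=1e-5, seed=0):
    """Monte Carlo estimate of the FIM entry for lam, using only a first
    (numerical) derivative: I(lam) = E[(d/dlam log f(X; lam))^2]."""
    rng = np.random.default_rng(seed)
    # sample from the skew-normal via its additive representation:
    # X = delta*|Z0| + sqrt(1-delta^2)*Z1, with delta = lam/sqrt(1+lam^2)
    delta = lam / np.sqrt(1 + lam**2)
    z0, z1 = rng.standard_normal(n), rng.standard_normal(n)
    x = delta * np.abs(z0) + np.sqrt(1 - delta**2) * z1
    # central finite difference of the log-density in lam (first derivative only)
    score = (log_density(x, lam + eps) - log_density(x, lam - eps)) / (2 * eps)
    return np.mean(score**2)

print(fisher_info_lambda(1.0))
```

The same score-based recipe applies whenever the log-density is differentiable almost everywhere, even if second derivatives fail to exist at the mode.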

The presence of repeated observations is in fact something that has to be taken into consideration when using improper priors, given that this may destroy the existence of the posterior under some sampling models. This is an issue of practical, not just theoretical, importance.
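A toy numerical check of this phenomenon (my own illustration, assuming a normal sample with the improper prior π(μ, σ) ∝ 1/σ): once μ is integrated out, the σ-marginal of the posterior is proportional to σ⁻ⁿ exp(−S/(2σ²)) with S the sum of squared deviations from the sample mean. With distinct observations (S > 0) the mass near σ = 0 is finite; with identical observations (S = 0) it blows up and the posterior does not exist:

```python
import numpy as np
from scipy.integrate import quad

def marginal_mass(S, n, eps):
    """Integral over sigma in (eps, 1] of sigma^{-n} * exp(-S/(2 sigma^2)),
    the sigma-marginal (up to a constant) of the posterior under the
    improper prior pi(mu, sigma) ∝ 1/sigma, after integrating out mu."""
    f = lambda s: s**(-n) * np.exp(-S / (2 * s**2))
    val, _ = quad(f, eps, 1.0)
    return val

n = 3
for eps in (1e-1, 1e-2, 1e-3):
    # S = 0.5: distinct observations, mass stabilises as eps -> 0
    # S = 0.0: identical observations, mass grows like eps^{-(n-1)}
    print(eps, marginal_mass(0.5, n, eps), marginal_mass(0.0, n, eps))
```

The left column of values converges while the right one diverges, which is exactly the practical failure mode described above.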

The discussion of the paper in Bayesian Analysis is very interesting indeed, covering prior elicitation for this sort of model (and in general), pros and cons of different “objective” priors, and different sorts of flexible models.

]]>Ah, yes! That’s really interesting! And would it be difficult?

For a subjective prior, Watson and Holmes gave that nifty description of the Dirichlet process as sampling in KL balls of fixed radius around a (discrete) base distribution, so you could imagine putting a prior on the parameter of the DP that controls how far you’re going [say, an exponential prior, or exp(-lambda * sqrt(·))].
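As a rough sketch of that idea (not the Watson and Holmes KL-ball construction itself, just the standard truncated stick-breaking view of the DP, with a hypothetical exponential hyperprior on the concentration parameter controlling how far a draw strays from the base measure):

```python
import numpy as np

def dp_draw(alpha, base_sampler, k=500, rng=None):
    """Truncated stick-breaking draw from DP(alpha, G0):
    weights w_j = v_j * prod_{i<j} (1 - v_i), v_j ~ Beta(1, alpha),
    atoms i.i.d. from the base distribution G0; weights are renormalised
    to absorb the truncation at k sticks."""
    rng = rng or np.random.default_rng()
    v = rng.beta(1.0, alpha, size=k)
    w = v * np.cumprod(np.concatenate(([1.0], 1 - v[:-1])))
    atoms = base_sampler(k, rng)
    return atoms, w / w.sum()

rng = np.random.default_rng(42)
# hypothetical hyperprior: alpha ~ Exponential(rate 1); small alpha
# concentrates the draw on few atoms, i.e. stays "close" to G0's support
alpha = rng.exponential(1.0)
atoms, w = dp_draw(alpha, lambda k, r: r.standard_normal(k), rng=rng)
print(w[:5].round(3))
```

Placing the hyperprior on alpha is the obvious first knob; whether it can be tuned to mimic a fixed-radius KL ball is exactly the open question.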

I wonder if there’s a way to shift this over to the reference setting…

]]>What I mean by this suggestion is that nonparametric Bayes is rarely endowed with subjective priors that one can support. The choice of reference priors in nonparametric settings is therefore even more relevant than in parametric models.
