## linearity, reversed

**W**hile answering a question on X validated about the posterior mean being a weighted sum of the prior mean and of the maximum likelihood estimator, with weights that do not depend on the data (which is true in conjugate natural exponential family settings), I re-read the wonderful 1979 paper of Diaconis & Ylvisaker establishing the converse: within exponential families, when the linear combination holds, the prior must be conjugate! I cannot think of a reasonable case outside exponential families where the linearity holds (again with constant weights, as otherwise it always holds in dimension one, albeit with weights possibly outside [0,1]).
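The linearity is easy to check numerically in the Beta-Binomial case, the textbook conjugate natural exponential family pair. A minimal sketch (the hyperparameters and sample size are arbitrary choices for illustration):

```python
import numpy as np

# Beta(a, b) prior on a Bernoulli/Binomial proportion, a conjugate
# natural exponential family setting where the linearity holds.
a, b = 3.0, 7.0                      # arbitrary prior hyperparameters
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.6, size=50)    # n Bernoulli observations
n, s = x.size, x.sum()

prior_mean = a / (a + b)
mle = s / n                          # maximum likelihood estimator
post_mean = (a + s) / (a + b + n)    # Beta(a+s, b+n-s) posterior mean

# the weight on the prior mean depends only on n, never on the data
w = (a + b) / (a + b + n)
assert np.isclose(post_mean, w * prior_mean + (1 - w) * mle)
```

The Diaconis & Ylvisaker result runs the other way: if the posterior mean is linear in the sufficient statistic with constant weights, the prior has to be of this conjugate form.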

September 21, 2020 at 11:02 am

There is the classical BNP example: Dirichlet-multinomial conjugacy extends to conjugacy for the Dirichlet process. From this, the posterior predictive can be written as a convex combination of the prior mean (aka the base measure) and the empirical measure.
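The DP posterior predictive in the comment can be simulated directly: after n observations from a DP(α, G₀) model, a new draw comes from G₀ with probability α/(α+n) and from the empirical measure otherwise. A minimal sketch, with α and G₀ = N(0,1) chosen arbitrarily for illustration:

```python
import numpy as np

alpha = 2.0                          # arbitrary concentration parameter
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=20)    # observations; base measure G0 = N(0,1)
n = x.size

def predictive_draw():
    # with probability alpha/(alpha+n), a fresh draw from G0;
    # otherwise a uniform draw from the past observations
    if rng.random() < alpha / (alpha + n):
        return rng.normal(0.0, 1.0)
    return rng.choice(x)

samples = np.array([predictive_draw() for _ in range(10_000)])
w = alpha / (alpha + n)              # weight on the base measure
frac_old = np.isin(samples, x).mean()  # empirical share of "old" draws
```

The mixture weights α/(α+n) and n/(α+n) depend on the sample size alone, matching the constant-weight linearity of the post.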