MAP or mean?!

“A frequent matter of debate in Bayesian inversion is the question, which of the two principle point-estimators, the maximum-a-posteriori (MAP) or the conditional mean (CM) estimate is to be preferred.”

An interesting topic for this arXived paper by Burger and Lucka that I (also) read in the plane to Montréal, even though I do not share the concern that we should pick between those two estimators (only or at all), since what matters is the posterior distribution and the use one makes of it. I thus disagree that there is any kind of a “debate concerning the choice of point estimates”. If Bayesian inference reduces to producing a point estimate, this is a regularisation technique and the Bayesian interpretation is both incidental and superfluous.

Maybe the most interesting result in the paper is that the MAP is expressed as a proper Bayes estimator! I was under the opposite impression, mostly because the folklore (echoed even in The Bayesian Core) that the MAP corresponds to a 0-1 loss function does not hold for continuous parameter spaces, and also because it seems to conflict with the results of Druilhet and Marin (BA, 2007), who point out that the MAP ultimately depends on the choice of the dominating measure. (Even though the Lebesgue measure is implicitly chosen as the default.) The authors of this arXived paper start with a distance based on the prior, called the Bregman distance, which may be the quadratic or the entropy distance depending on the prior. Defining a loss function that mixes this Bregman distance with the quadratic distance

$\|K(\hat u-u)\|^2+2D_\pi(\hat u,u)$

produces the MAP as the Bayes estimator. So where did the dominating measure go? In fact, nowhere: both the loss function and the resulting estimator are clearly dependent on the choice of the dominating measure… (The loss depends on the prior but this is not a drawback per se!)
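For completeness, the Bregman distance in question is the standard one: writing the prior as $\pi(u)\propto\exp(-J(u))$ for a convex functional $J$ (my notation, not necessarily the paper's), it is defined as

$D_\pi(\hat u,u)=J(\hat u)-J(u)-\langle p,\hat u-u\rangle,\qquad p\in\partial J(u)$

which indeed reduces to the quadratic distance when $J$ is quadratic (Gaussian prior) and to an entropy distance for an entropy-type prior.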

3 Responses to “MAP or mean?!”

1. […] point I am making in my book and in the previous question is not original but worth repeating. For a dominating measure $\text{d}\mu$, the maximum entropy prior is defined […]

2. rasmusab Says:

What about using medians? If the point estimate is used more as a compact description of the posterior, it is somehow harder to know what the MAP and the mean “mean” when the distribution is not symmetric; the median, on the other hand, is always the middle…

• I am not trying to promote one or the other in a “trick or treat” spirit. It all depends on the loss or utility function that forces you to deliver a point estimate; else the whole posterior is fine. Or at the very least a confidence/credible interval/region….
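To illustrate the exchange above: for a skewed posterior the three point estimates genuinely disagree. A minimal sketch, using a hypothetical Gamma(3,1) posterior (not from the paper), with the mode taken analytically and the mean and median approximated by Monte Carlo:

```python
import random
import statistics

# Hypothetical right-skewed posterior: Gamma(shape=3, scale=1)
shape, scale = 3.0, 1.0

# Analytic point estimates for a Gamma(k, theta) density:
map_estimate = (shape - 1.0) * scale  # mode = (k-1)*theta, the MAP w.r.t. Lebesgue measure
exact_mean = shape * scale            # conditional mean = k*theta

# Monte Carlo approximation of mean and median from posterior draws
random.seed(42)
draws = [random.gammavariate(shape, scale) for _ in range(200_000)]
mc_mean = statistics.fmean(draws)
mc_median = statistics.median(draws)

print(f"MAP    = {map_estimate:.3f}")
print(f"median ~ {mc_median:.3f}")
print(f"mean   ~ {mc_mean:.3f}")
# For a right-skewed density the ordering is MAP < median < mean
```

The ordering flips for left-skewed densities and collapses for symmetric unimodal ones, which is precisely why no single point estimate summarises an asymmetric posterior.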
