Archive for stochastic dominance

minimax, maximin or plain

Posted in Kids, R with tags dice, minimax problem, stochastic dominance, The Riddler on June 1, 2020 by xi'an

A simple riddle from The Riddler on choosing between the maximum of two minima of two throws of an N-face die, the minimum of two maxima of two throws of an N-face die, and a single throw. Since the maximin is always less than the minimax, the second choice is always worse than the first, and a stochastic dominance version of the argument shows that the single throw stands in the middle.
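As a quick Monte Carlo sketch of the comparison (my own code rather than the one behind the post), assuming a standard N=6 die and independent throws behind each option, one can compare the empirical means and superimpose the empirical cdfs of the three strategies:

# quick Monte Carlo sketch (a reconstruction, not the post's code), assuming
# N=6 faces and independent throws behind each of the three options
N <- 6
T <- 1e5
roll <- function() sample(1:N, T, replace = TRUE)
maximin <- pmax(pmin(roll(), roll()), pmin(roll(), roll()))  # max of two minima
minimax <- pmin(pmax(roll(), roll()), pmax(roll(), roll()))  # min of two maxima
single  <- roll()                                            # plain single throw
round(c(maximin = mean(maximin), single = mean(single), minimax = mean(minimax)), 2)
# empirical cdfs, the minimax cdf never rising above the maximin cdf
plot(ecdf(maximin), col = "sienna", main = "")
lines(ecdf(minimax), col = "tomato")
lines(ecdf(single), col = "gold")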
more concentration, everywhere

Posted in R, Statistics with tags best equivariant estimator, Cauchy distribution, cross validated, ecdf, Pitman best equivariant estimator, Pitman closeness, Pitman nearness, R, smoothmest R package, stochastic dominance, uniform optimality on January 25, 2019 by xi'an

Although it may sound like an excessive notion of optimality, one can hope to obtain an estimator δ of a unidimensional parameter θ that is always closer to θ than any other estimator, in distribution if not almost surely, meaning the cdf of (δ-θ) is steeper than that of other estimators enjoying the same cdf value at zero (for instance ½, to make them all median-unbiased). When I saw this question on X validated, I thought of the Cauchy location example, where there is no uniformly optimal estimator, albeit a large collection of unbiased ones. But a simulation experiment shows that the MLE does better than the competition, at least better than three of them (shown above), or rather four (since I also tried the Pitman estimator, via Christian Hennig's smoothmest R package). The differences to the MLE empirical cdf make it clearer below (with tomato for a score correction, gold for the Pitman estimator, sienna for the 38% trimmed mean, and blue for the median).

I wonder at a general theory along these lines. There is a vague similarity with Pitman nearness or closeness, but without the paradoxes induced by this criterion. More in the spirit of stochastic dominance, which may be achievable for location invariant and mean unbiased estimators…
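For readers wanting to recreate this kind of picture, here is a rough sketch of the experiment (my own reconstruction, not the code behind the post): Cauchy samples with location zero, the MLE, the median, the 38% trimmed mean, and the Pitman estimator computed by direct numerical integration rather than through smoothmest; the score-corrected estimator mentioned above is left out, and the plotted curves are the differences of the empirical cdfs of the absolute errors with the MLE one:

# rough reconstruction of the experiment (a sketch, not the post's code)
set.seed(1)
T <- 1e3; n <- 25
mle <- med <- trim <- pitman <- numeric(T)
the <- seq(-20, 20, length.out = 500)  # truncated grid for the Pitman integral
for (t in 1:T) {
  x <- rcauchy(n)
  med[t]  <- median(x)
  trim[t] <- mean(x, trim = 0.38)      # 38% trimmed mean
  # MLE by unidimensional optimisation near the median (a shortcut, as the
  # Cauchy likelihood may be multimodal)
  mle[t] <- optimize(function(m) -sum(dcauchy(x, m, log = TRUE)),
                     interval = med[t] + c(-5, 5))$minimum
  # Pitman best equivariant estimator by direct numerical integration
  lik <- sapply(the, function(m) prod(dcauchy(x, m)))
  pitman[t] <- sum(the * lik) / sum(lik)
}
# differences of the empirical cdfs of the absolute errors with the MLE one
z <- seq(0, 2, length.out = 200)
dif <- function(d) ecdf(abs(d))(z) - ecdf(abs(mle))(z)
dp <- dif(pitman); dt <- dif(trim); dm <- dif(med)
plot(z, dp, type = "l", col = "gold", ylim = range(dp, dt, dm),
     xlab = "|error|", ylab = "difference to MLE ecdf")
lines(z, dt, col = "sienna")
lines(z, dm, col = "blue")
abline(h = 0, lty = 2)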