The Bayesian posterior, on the other hand, can be quite different if you use a prior to encode the [a,b] restriction. A comparison of the Bayesian answer and the SEV answer for various endpoints [a,b] would be quite illuminating.

xi’an,

The Normal distribution example usually used in Mayo’s papers has the property that SEV(mu > mu_0) is numerically equal to the Bayesian posterior P(mu > mu_0 : data) under a flat prior. You can see this by a simple change of variables in the integral used to compute it. This congruence wouldn’t be such a big deal if it weren’t for the fact that this is almost the only example ever discussed.
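The equivalence can be checked numerically; here is a minimal sketch, with made-up values for the observed mean, cut-off, and sample size:

```python
# Check that for a Normal mean with known variance, SEV(mu > mu_0)
# equals the flat-prior posterior P(mu > mu_0 : data).
from statistics import NormalDist

sigma, n = 1.0, 25        # assumed known sigma and sample size
se = sigma / n ** 0.5     # standard error of the sample mean
xbar, mu0 = 0.4, 0.2      # hypothetical observed mean and reference value

# Severity: probability of a worse fit (smaller sample mean) under mu = mu_0
sev = NormalDist(mu0, se).cdf(xbar)

# Flat-prior posterior: mu : data ~ N(xbar, se^2)
post = 1 - NormalDist(xbar, se).cdf(mu0)

print(sev, post)  # identical up to floating-point error
```

The change of variables mentioned above is visible here: both quantities are the same standard-normal tail probability of (xbar − mu_0)/se.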

This identity doesn’t always hold, though. For example, using the H’ defined above, SEV(H’) is definitely no longer equal to P(H’ : data). So maybe H’ would be a good example to illustrate the different behavior and performance of SEV and Bayesian posteriors.

Uh, uh… The way the severity is explained in Spanos’ Philosophy of Science paper I am discussing here is that the severity is computed at the current value, θ₁, as it grows away from θ₀, which sounds like a Type II error to me.

In Spanos (2013), I see that this probability is computed at the boundary θ₁ so how does it differ from a Type II error?

xi’an,

Severity of a hypothesis such as:

H: theta>theta_0

is calculated using the boundary value theta_0 (since in the examples where I’ve seen this done, this value resulted in the largest severity). But what about the hypothesis:

H’: theta_0 + 10^{-1000} > theta > theta_0

I take it that the Severity of H’ would again be calculated at theta_0 for the same reason, resulting in H’ having the same Severity as H.
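To make the point concrete, here is a sketch (with hypothetical numbers for the observed mean, standard error, and theta_0) showing that the boundary evaluation gives H and H’ literally the same severity, since 10^{-1000} is far below double-precision resolution:

```python
# Severity of H: theta > theta_0 evaluated at the boundary theta_0, versus
# severity of H': theta_0 + 10^-1000 > theta > theta_0, evaluated the same way.
from statistics import NormalDist

se = 0.2                 # assumed standard error of the sample mean
xbar, theta0 = 0.4, 0.0  # hypothetical observed mean and boundary value

def sev_at(theta):
    # Severity evaluated at a point: P(sample mean < observed mean; theta)
    return NormalDist(theta, se).cdf(xbar)

# 10^-1000 underflows to zero in double precision, so the two boundary
# evaluations are the same number
print(sev_at(theta0), sev_at(theta0 + 1e-1000))
```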

http://errorstatistics.com/2012/10/21/mayo-section-6-statsci-and-philsci-part-2/

Using the one-sided test T+

H₀: μ ≤ 0 vs H₁:μ>0

defined in that post, let the null be rejected whenever the sample mean is observed to be as great as, or greater than, .4 (significance level ~.03). So .4 is the fixed cut-off for rejecting the null in a standard N-P test. The power of the test against .5 is POW(.5) = .7, and the power against .6 is POW(.6) = .84, etc. Compare these to the values of SEV for claims about positive discrepancies from 0, with the same observed sample mean, .4. They are given in this post.
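The quoted power figures can be reproduced assuming σ = 1 and n = 25 (standard error 0.2); those parameter values are an inference from the numbers above, not stated here. A sketch, using the usual severity evaluation for this example:

```python
# Power of test T+ (H0: mu <= 0, cut-off .4 for the sample mean) and SEV for
# claims mu > mu_1, assuming sigma = 1, n = 25 (standard error 0.2).
from statistics import NormalDist

se = 0.2       # sigma / sqrt(n), inferred from the quoted figures
cutoff = 0.4   # fixed rejection cut-off for the sample mean
xbar = 0.4     # observed sample mean

def power(mu):
    # P(sample mean >= cutoff; mu)
    return 1 - NormalDist(mu, se).cdf(cutoff)

def sev_gt(mu1):
    # Severity of the claim mu > mu_1 after rejection with xbar observed:
    # P(sample mean <= xbar; mu = mu_1)
    return NormalDist(mu1, se).cdf(xbar)

print(round(power(0.5), 2), round(power(0.6), 2))    # 0.69 0.84
print(round(sev_gt(0.5), 2), round(sev_gt(0.6), 2))  # 0.31 0.16
```

Note that power and severity are complementary tails here: with xbar equal to the cut-off, SEV(mu > mu_1) = 1 − POW(mu_1), which is what makes the comparison with the post’s SEV table direct.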

Ok, I see: the notation is definitely confusing, the statement should not be inside the probability…

And I repeat my answer: one computes at a point and the others follow, as with power curves (for the examples discussed). (Even though it is not a conditional but, as with all error probabilities, “computed under the assumption that”.)

Thanks: the notation is a bit confused up there, but what I mean is: how can one condition on {θ; θ > θ₁ is false} without a prior?
