Comments on: statistical significance as explained by The Economist
https://xianblog.wordpress.com/2013/11/07/statistical-significance-as-explained-by-the-economist/
an attempt at bloggin, nothing more...
Wed, 13 Nov 2013 07:22:03 +0000
By: Kevin Kane
https://xianblog.wordpress.com/2013/11/07/statistical-significance-as-explained-by-the-economist/comment-page-1/#comment-39943
Wed, 13 Nov 2013 07:22:03 +0000
Is the 20 false negatives correct? If it’s 80% power, wouldn’t it be 200 out of 1000 false negatives? Confused.
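The arithmetic behind Kevin Kane’s question can be sketched directly. Assuming the setup commonly attributed to the Economist piece (1000 hypotheses tested, of which 100 are actually true, with power 0.8 and significance level 0.05 — figures not restated in this thread, so treat them as an assumption), power applies only to the 100 true hypotheses, which is why the expected count of false negatives is 20 rather than 200:

```python
# Illustrative sketch, assuming the article's setup:
# 1000 hypotheses tested, 100 of them actually true,
# power = 0.8, significance level alpha = 0.05.
n_hypotheses = 1000
n_true = 100                      # hypotheses that are actually true
n_false = n_hypotheses - n_true   # 900 null hypotheses that are true nulls
power = 0.8                       # P(reject null | effect is real)
alpha = 0.05                      # P(reject null | no effect)

true_positives = power * n_true          # real effects detected
false_negatives = (1 - power) * n_true   # real effects missed
false_positives = alpha * n_false        # spurious "discoveries"

# Power only governs the 100 true hypotheses, so the expected number
# of false negatives is (1 - 0.8) * 100 = 20, not 200 out of 1000.
print(true_positives, false_negatives, false_positives)
```

Under these assumptions the test yields 80 true positives, 20 false negatives, and 45 false positives, so nearly a third of the 125 rejections are spurious.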
By: Entsophy
https://xianblog.wordpress.com/2013/11/07/statistical-significance-as-explained-by-the-economist/comment-page-1/#comment-39927
Tue, 12 Nov 2013 15:41:34 +0000
[…] believed the crises in science will abate if we only educate everyone on the correct interpretation of […]
By: Mayo
https://xianblog.wordpress.com/2013/11/07/statistical-significance-as-explained-by-the-economist/comment-page-1/#comment-39891
Mon, 11 Nov 2013 04:24:06 +0000
My blogpost on this: http://errorstatistics.com/2013/11/09/beware-of-questionable-front-page-articles-warning-you-to-beware-of-questionable-front-page-articles-i/
By: Entsophy
https://xianblog.wordpress.com/2013/11/07/statistical-significance-as-explained-by-the-economist/comment-page-1/#comment-39843
Fri, 08 Nov 2013 14:28:07 +0000
I had a bit of fun at the whole “we can save classical statistics if we just emphasize ‘fail to reject’ is not the same as ‘accept’ strongly enough” here:

By: Mayo
https://xianblog.wordpress.com/2013/11/07/statistical-significance-as-explained-by-the-economist/comment-page-1/#comment-39810
Thu, 07 Nov 2013 00:12:48 +0000
Just from looking at your post, not the article, it seems to me another example of a crass use of tests as interested in buckets of nulls and alternatives (as opposed to the case at hand). A power of .8 doesn’t say the test confirms 80% of the true hypotheses, as they assert. It means that the probability the test would correctly reject the null, under the assumption that the discrepancy from the null is of a given magnitude d, is .8. (I don’t know if the test is 1- or 2-sided.) But never mind the paper, which I don’t want to discuss out of context. Just a remark on something you say: the power doesn’t depend on a prior or on knowing the alternative, any more than interpreting the result of a measuring tool requires knowing the true measurement ahead of time (or its probability). I can evaluate which discrepancies a test will detect with high power (I prefer to consider P(a p-value less than the one observed; values of the discrepancy)). If the test has a high probability of producing so small a p-value (or smaller) even with an underlying discrepancy no larger than d′, then the observed p-value is a poor indication of a discrepancy even larger than d′.

And, of course, Berger and Sellke’s “too early” allegation depends on a spiked prior on the null, leading to results scarcely desirable for an error statistician. But you know I’ve discussed all this elsewhere.
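Mayo’s point — that power is a function of a hypothesized discrepancy d, evaluated at each value rather than averaged over a prior — can be sketched with a one-sided z-test (an illustrative setup of my own; the comment does not specify the test, sample size, or σ):

```python
# Hypothetical sketch: power curve of a one-sided z-test of
# H0: mu = 0 vs H1: mu > 0, with known sigma and fixed alpha = 0.05.
# All parameter choices here are illustrative, not from the thread.
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power(d, n, sigma=1.0):
    """P(reject H0 | true discrepancy d).

    The test rejects when the standardized sample mean exceeds
    z_{0.95}; the critical value is hard-coded because the stdlib
    lacks an inverse normal CDF.
    """
    z_crit = 1.6448536269514722  # z_{0.95}, i.e. alpha = 0.05
    return norm_cdf(d * sqrt(n) / sigma - z_crit)

# The curve is evaluated at each hypothetical discrepancy d;
# no prior over d (or knowledge of the true d) is required.
for d in (0.0, 0.1, 0.2, 0.3, 0.5):
    print(f"d = {d:.1f}: power = {power(d, n=100):.3f}")
```

At d = 0 the “power” is just the size of the test (0.05), and it rises monotonically with the assumed discrepancy — which is the sense in which one can ask, before seeing any data, which discrepancies the test would detect with high probability.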