Archive for Red State Blue State

and it only gets worse [verbatim]

Posted in Kids, Travel on October 4, 2020 by xi'an

“A basic principle of the law — and of everyday fairness — is that we apply rules with consistency, and not based on what’s convenient or advantageous in the moment. The rule of law, the legitimacy of our courts, the fundamental workings of our democracy all depend on that basic principle.” Barack Obama [on Ruth Bader Ginsburg’s replacement], 18 September

“I don’t know that [Ruth Bader Ginsburg] said that, or if that was written out by Adam Schiff, and Schumer and Pelosi,” DT, 21 September

“I don’t wear a mask like [Joe Biden]. Every time you see him, he’s got a mask. He could be speaking 200 feet away from them and he shows up with the biggest mask I’ve ever seen.” DT, 29 September

“With time it goes away. And you’ll develop like a herd mentality [sic]. It’s going to be herd developed, and that’s going to happen. That will all happen.” DT, 16 September

“If you take the blue states out, we’re at a level that I don’t think anyone in the world would be at,” DT, 17 September

“It will start getting cooler, just you watch.” DT, 14 September

“I don’t think science knows, actually.” DT, 14 September

“Because of the new and unprecedented massive amount of unsolicited ballots which will be sent to ‘voters,’ or wherever, this year, the Nov 3rd Election result may NEVER BE ACCURATELY DETERMINED, which is what some want.” DT, 17 September

“There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent.” DT, 26 May

“I can tell you there’s [no race problem] with me. Because I have great respect for all races, everybody.” DT, 15 September

“America is fundamentally good, and has much to offer the world, because our founders recognized the existence of God-given unalienable rights and designed a durable system to protect them” M. Pompeo, July 2020

reproducibility

Posted in Books, Statistics on December 1, 2015 by xi'an

While in Warwick this week, I borrowed a recent issue (Oct. 08, 2015) of Nature from Tom Nichols and read it over dinner in a maths house. Its featured topic was reproducibility, with a long introductory article about “Fooling ourselves”, starting with an illustration from Andrew himself, who had gotten a sign wrong in one of the election studies that are the basis of Red State, Blue State. While the article does not bring radically new perspectives on the topic, there is nothing shocking in it, and it goes on to mention Peter Green and his Royal Statistical Society President’s column about the Sally Clark case, as well as Eric-Jan Wagenmakers and a collaboration with competing teams that sounded like “putting one’s head on a guillotine”. Which relates to a following “comment” on crowdsourcing research or data analysis.

I however got most interested by another comment, by MacCoun and Perlmutter, where they advocate a systematic blinding of data to avoid conscious or unconscious biases. While I deem the idea quite interesting, and connected with anonymisation techniques in data privacy, I find the presentation rather naïve in its goals (from a statistical perspective). Indeed, if we consider data produced by a scientific experiment towards the validation or invalidation of a scientific hypothesis, the dataset usually stands on its own, with no other experiment of a similar kind to refer to. Add too much noise and only noise remains; add too little and the original data remains visible. This means it is quite difficult to calibrate the blinding mechanism so that the blinded data remain realistic enough to be analysed, yet different enough from the original data for different conclusions to be drawn. The authors suggest the blinding be done by software, by adding noise, bias, label switching, &tc. But I do not think this blinding can be done blindly, i.e., without a clear idea of what the possible models are, so that the perturbed datasets created out of the original data do not unduly favour one of the models under comparison, while remaining realistic for at least one of those models. Thus, some preliminary analysis of the original data, or of pseudo-data simulated from each of the proposed models, is somewhat unavoidable to calibrate the blinding machinery towards realistic values. If designing a new model is part of the inferential goals, this may prove impossible… Again, I think having several analyses run in parallel on several perturbed datasets is quite a good idea to detect the impact of some prior assumptions. But this requires statistically savvy programmers. And possibly informative prior distributions.
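To fix ideas, the blinding mechanisms mentioned above (noise addition, label switching) can be sketched in a few lines. The following is a hypothetical toy illustration on a two-group comparison, not the software MacCoun and Perlmutter have in mind; the tuning values (noise scale, fraction of swapped labels) are arbitrary and are precisely the quantities whose calibration against the candidate models is the difficulty discussed above.

```python
import numpy as np

rng = np.random.default_rng(42)

def blind(data, labels, noise_sd=0.5, swap_frac=0.1, rng=rng):
    """Perturb a dataset before analysis: add Gaussian noise to the
    responses and flip a small fraction of the binary group labels.
    Too much noise and only noise remains; too little and the
    original data remains visible."""
    blinded = data + rng.normal(0.0, noise_sd, size=data.shape)
    labels = labels.copy()
    n_swap = int(swap_frac * len(labels))
    idx = rng.choice(len(labels), size=n_swap, replace=False)
    labels[idx] = 1 - labels[idx]  # label switching on a random subset
    return blinded, labels

def mean_diff(x, g):
    """The (naive) analysis: difference of group means."""
    return x[g == 1].mean() - x[g == 0].mean()

# toy experiment: group 1 has a shifted mean
n = 200
labels = rng.integers(0, 2, size=n)
data = rng.normal(0.0, 1.0, size=n) + 0.8 * labels

# several analyses run in parallel on independently perturbed datasets
blinded_diffs = [mean_diff(*blind(data, labels)) for _ in range(5)]
print(mean_diff(data, labels), blinded_diffs)
```

Comparing the spread of the blinded estimates with the original one gives a rough sense of how much the perturbation attenuates the signal, which is exactly the calibration question raised in the post.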