Let me start by saying that I really like Failing Grade: 89% of Introduction-to-Psychology Textbooks That Define or Explain Statistical Significance Do So Incorrectly. In this brief paper, Cassidy and colleagues examine how the authors of introductory psychology textbooks define or describe statistical significance. Specifically, they look at the definition of statistical significance "most central to a reader's experience" - not all of the definitions (if there was more than one), nor necessarily the only one. It's not quite meta-research, since it's research on textbooks rather than research on research, but it's close. It's a great piece of investigative journalism, and it's on the teaching of statistics - a really important topic.
However, I want to focus here briefly on what the authors describe as the "odds against chance" fallacy: the seemingly incorrect assertion that "statistical significance means that the likelihood that the result is due to chance is less than 5%". This is worth focusing on because it was the most common "fallacy" - present in 80% of the definitions studied.
I think this fallacy may, in some circumstances, be shorthand (a short and simple way of expressing something) designed for lay audiences, rather than a textbook writer's mistaken belief (a true fallacy). I've recently realized that people find the technical definition (the probability of data this extreme or more so, given the null... etc.) quite difficult. I think there is a real possibility that this definition represents an earnest attempt on the part of the textbook writers studied to communicate accessibly with statistical novices, knowing that it is technically a bit rough around the edges. While a description like this may infuriate purists, it's possible that this is what an accessible description of statistical significance looks like.
If I had been a reviewer on this paper, I might have suggested the authors reach out to the textbook writers and ask some quick questions. Why did you define it that way? Were you catering for an audience? And there's a comment in the discussion which leads me to believe the authors have an eye on this anyway. The authors say that their results "may also suggest that the odds-against-fallacy is a particularly tempting fallacy in the context of trying to communicate statistical significance to a novice audience". But is it a tempting "fallacy" (false belief) or convenient shorthand? I think this is an open question.
Moreover, if it is accessible shorthand, I think it's not that bad. Here is my defense: if the null is true, but we have observed a non-null result, then that result can be said to have occurred "by chance". It's a false positive, a coincidence, "chance". If (a) the p-value tells us something about how likely data this extreme or more so are, if the null is true, and (b) we have a non-null result, and (c) our p-value is "small", then (d) it might be OK to say the likelihood it's due to chance is small (less than x%). We can talk about chance with something of a straight face. It's all a bit colloquial, it's all a bit loose, but it's not the end of the world. Perhaps it's a C+, not an F? And hopefully students who move on from the introductory texts are taught something more rigorous in due course. Hopefully.
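The kernel of truth in the shorthand can be made concrete with a quick simulation (my own illustrative sketch, not anything from the paper). Under a true null, p-values below .05 turn up about 5% of the time - that long-run false-positive rate is what the threshold actually controls, which is related to, but not the same as, the probability that any particular significant result "is due to chance":

```python
import math
import random

random.seed(0)

def p_value(sample):
    """Two-sided p-value for H0: population mean = 0, normal approximation."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    z = mean / (sd / math.sqrt(n))
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Simulate many experiments in which the null is TRUE (effect = 0):
sims = 3000
false_positives = sum(
    p_value([random.gauss(0, 1) for _ in range(100)]) < 0.05
    for _ in range(sims)
)
# Roughly 5% of null experiments come out "significant" purely by chance.
print(false_positives / sims)
```

The 5% here is a rate over hypothetical repetitions when the null holds; whether a given significant result was "due to chance" also depends on how often the null is true in the first place, which is exactly the gap the shorthand papers over.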
Maybe I'm too soft, ill-informed, or missing something, but I don't think it's terrible. Anyway, all this is to say it's an interesting paper. But I think there's more to it.