on October 06 2020 18:04:20
When I saw the graph with the huge error bars, I was skeptical. In the comments, it seems obvious to colleagues that this is garbage. How the hell did this ever get through peer review?
This one guy has a great suggestion that should just be mandatory practice wherever possible:
I think it harkens back to an era where academics (and, hence, peer reviewers) had substantial statistical education. Today, that's often not the case, and statistics, as a field, has developed significantly over the past decades. Unless a researcher has at least a minor in statistics, over and above the one or two statistical methods courses required of undergrads/grad students, they'd be better off anonymizing their data and handing it off to a third-party statistician to crunch the numbers. This would eliminate a TON of bias. However, that doesn't help peer reviewers who don't have a background in statistics determine what's "appropriate".
That said, studies that don't have statistically significant results are just as important to the library of human knowledge. However, the trend in academia is that such studies are "meaningless" and often don't get published because the results aren't "significant". This reveals a confusion between "significance" and "statistical significance" that REALLY needs to be sorted out, in my opinion.
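To make that distinction concrete, here's a rough Python sketch (assuming numpy/scipy are available; the sample sizes and effect sizes are made up purely for illustration). With an enormous sample, a trivially small effect can come out "statistically significant", while a small study of a genuinely sizeable effect, the "huge error bars" case, can easily miss p < 0.05 and still be worth publishing:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical case 1: huge sample, trivially small effect.
# A mean difference of 0.02 on a scale with sd = 1 is practically
# meaningless, yet with n = 200,000 per group it will usually be
# "statistically significant".
trivial_a = rng.normal(loc=0.00, scale=1.0, size=200_000)
trivial_b = rng.normal(loc=0.02, scale=1.0, size=200_000)
_, p_trivial = stats.ttest_ind(trivial_a, trivial_b)
print(f"huge n, tiny effect:  p = {p_trivial:.3g}")   # typically p << 0.05

# Hypothetical case 2: small sample, real effect.
# A mean difference of 0.5 sd is substantial, but with only 12 subjects
# per group the confidence intervals are wide and the test often fails
# to reach p < 0.05.
small_a = rng.normal(loc=0.0, scale=1.0, size=12)
small_b = rng.normal(loc=0.5, scale=1.0, size=12)
_, p_small = stats.ttest_ind(small_a, small_b)
print(f"small n, real effect: p = {p_small:.3g}")     # often p > 0.05
```

The point isn't the specific numbers, it's that a p-value alone tells you nothing about whether an effect matters, which is exactly the misunderstanding the comment is complaining about.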