Maybe (un)true

Andrew Gelman (whose BDA 3 is definitely worth having and consulting) comments on the reactions to a COVID-19-related preprint that counts John P. A. Ioannidis among its authors. I am following the full story because I included part of it in a seminar I recently gave on statistical fluctuations, the reproducibility crisis, causality, and the like. I wrote about another excerpt from that seminar when commenting on Sabine Hossenfelder’s opinion on model predictivity. I should really stop quoting from it until I finish drafting my write-up.

In any case, you might know Ioannidis as the author of the famous “Why Most Published Research Findings Are False”, a 2005 paper in which he argued that several factors concur in making a large number of research findings non-reproducible. The causes he identified are mostly related to conscious or unconscious biases of the researchers: sheer prejudice, incorrect application of statistical methods, biases stemming from competition between research groups in lively fields, and publication bias. Publication bias refers to the fact that journals publish positive results more readily than negative ones: the idea is that researchers, as a consequence, unconsciously favor (give less scrutiny to) a positive preliminary result over a negative one.
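The quantitative core of that 2005 argument is a simple positive-predictive-value computation; here is a minimal sketch in my own notation, with purely illustrative numbers. If $R$ is the pre-study odds that a probed relationship is true, $\alpha$ the type I error rate and $\beta$ the type II error rate, the fraction of claimed positive findings that are actually true is

$$\mathrm{PPV} = \frac{(1-\beta)\,R}{R - \beta R + \alpha}.$$

With, say, $R = 0.1$, $\alpha = 0.05$ and $\beta = 0.2$, this gives $\mathrm{PPV} \approx 0.08/0.13 \approx 0.62$, and any bias that nudges borderline results toward “positive” drags it down further.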

Back to the COVID-19 preprint. It sparked quite some discussion on the internet: Gelman pointed out some fatal flaws in the study, and Taleb… well, Taleb went ballistic (see photo). To spice things up, there has apparently also been a whistleblower complaint stating that the study was funded by the founder of JetBlue (who is notoriously skeptical about the COVID-19 mortality rate).

What I want to highlight from the whole affair is what Gelman says in the comment linked above:

[…] “peer-reviewed research” is also “provisional knowledge — maybe true, maybe not.”

That, fellow researchers, is the whole point of the science we do.