Scott Adams, creator of Dilbert, iconoclast, and out-of-the-box social and political thinker, has a daily podcast which has become my top daily listen.
Recently, he has given voice to a growing distrust of science, or at least of scientists and science popularizers, who have lost public trust through the debacle of hiding the origins of the Covid virus and through the initial promises of the safety and effectiveness of the vaccines (all the while brainwashing the public through repetition of “safe and effective”).
And even before that, climate alarmism and its repeated failed prophecies, the demonization of fossil fuels, and the impractical rush towards renewable energy have made people realize that modern science is severely corrupted by money and politics.
With that background, Adams has suggested a few steps towards evaluating scientific claims in the media. These are basic but important.
1. Cut your odds by 2
- 50% of scientific reports cannot be reproduced or are contradicted by later data. So there is already a one-in-two chance that what you’re reading is not true. 1 2
2. Look for good research design
- Was it a double-blind, well-controlled study, or just a survey where people self-reported? If it’s the latter, it is even less likely to be accurate.
3. Sample size and diversity
- How well was the target population selected? How many subjects, and how many different types of subjects? Was it just one specific demographic? The less diverse the population, the less likely the results are to be broadly applicable. 3
4. Compare to your own experience
- How does it match your own observations? If it doesn’t, it’s even less likely to be true.
5. Who benefits?
- Are there any corporate, governmental, or environmental organizations that would want this report to be true? Follow the money. Are they associated with the research? If so, it’s even less likely to be true.
- If the funding sources are obscure or impossible to find, your skepticism should increase as well. 4 5 6 7
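The checklist above can be read as a series of discounts on a claim's credibility: start at roughly even odds (step 1), then knock the estimate down for each additional red flag. Here is a minimal toy sketch of that idea in Python. This is my own illustrative construction, not Adams' method, and the specific discount numbers are arbitrary placeholders; only the direction (each red flag lowers trust) reflects the text.

```python
# Toy credibility-discount model for the five-step checklist.
# The base rate and all discount factors are made-up illustrative numbers.

BASE_CREDIBILITY = 0.5  # step 1: roughly half of reports fail to hold up

# Hypothetical multiplicative discounts for the remaining heuristics.
DISCOUNTS = {
    "self_reported_survey": 0.6,    # step 2: weak research design
    "narrow_sample": 0.7,           # step 3: small or homogeneous sample
    "contradicts_experience": 0.8,  # step 4: clashes with your observations
    "funder_benefits": 0.5,         # step 5: interested or hidden funding
}

def credibility(red_flags):
    """Multiply the base rate by a discount for each red flag present."""
    score = BASE_CREDIBILITY
    for flag in red_flags:
        score *= DISCOUNTS[flag]
    return score

if __name__ == "__main__":
    # Example: a self-reported survey funded by an interested party.
    print(credibility(["self_reported_survey", "funder_benefits"]))
```

The multiplicative form just encodes the article's repeated refrain that each flaw makes the claim "even less likely to be true"; nothing about the exact output should be taken literally.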
1. Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature, 533(7604), 452-454. Cites estimates that as much as 50-90% of the published scientific literature may be difficult or impossible to replicate.
2. Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483(7391), 531-533. Found that only 11% of pre-clinical cancer studies could be replicated.
3. Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. A theoretical analysis arguing that, due to factors like small sample sizes and publication bias, most claimed research findings across disciplines are likely false.
4. Bekelman, J. E., Li, Y., & Gross, C. P. (2003). Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA, 289(4), 454-465. This systematic review found that studies sponsored by for-profit organizations were more likely to reach conclusions favorable to their sponsors than studies with non-profit sponsors.
5. Lundh, A., Sismondo, S., Lexchin, J., Busuioc, O. A., & Bero, L. (2012). Industry sponsorship and research outcome. Cochrane Database of Systematic Reviews, (12). This Cochrane review analyzed studies across multiple disciplines and found that industry-sponsored studies were more likely to yield pro-industry conclusions than non-industry-sponsored studies.
6. Bourke, A., Dattani, H., & Robinson, M. (2004). Defending the integrity of science. Lancet, 363(9425), 1944. Discusses how industry funding of medical research can lead to suppression of negative findings and publication of only the results favorable to the sponsor.
7. Krimsky, S. (2013). Do financial conflicts of interest bias research?: An inquiry into the “funding effect” hypothesis. Science, Technology, & Human Values, 38(4), 566-587. Reviews studies across disciplines and finds evidence that research sponsored by for-profit firms tends to yield more pro-industry conclusions.