Science quality assurance
Retraction Watch🔗
Why? Because retractions too often happen quietly, with vague reasons given or no reason at all, as if they were an embarrassment to the parties involved rather than an essential part of the March of Science.
People who encountered a study before its retraction often never learn that it has been retracted, so they go on spreading its incorrect conclusions.
Thus, Retraction Watch. Since they began their work around 2010, retraction notices have become far more informative [see RW self report], and they have helped several scientific sleuths gain recognition.
Related: Health News Review. "Note to our followers: Our nearly 13-year run of daily publication of new content on HealthNewsReview.org came to a close at the end of 2018. Publisher Gary Schwitzer and other contributors may post new articles periodically. But all of the 6,000+ articles we have published contain lessons to help you improve your critical thinking about health care interventions. And those will still be alive on the site for a couple of years." My guess is they got a little too influential. I'm glad they opted to shut down cleanly; for all I know they could have been offered "donations" to keep going, and we would never have known. [ Rhetorical device: show the actual website and point out the date on the last review; it makes it more real ]
PubPeer🔗
I'm honestly surprised that PubMed and the other classic databases never thought of attaching a discussion forum to their publications. I'm glad they didn't, though: can you imagine paywalling the discussion? Or giving bribeable journals the power to delete or edit critical posts?
It brings to mind the bibliography app Mendeley, which was bought by Elsevier. Many people grew suspicious about why Elsevier would buy a tool that runs counter to its business model, and it's probably unwise to use Mendeley now. There is an open-source alternative, Zotero, which can't be bought out. The moral is that you don't let researchers and publishers own the rights to the discussion; the discussion must happen elsewhere, in a medium created by readers for readers.
Publons
Ioannidis 2005🔗
Review/summary, may be good prior to reading the study intensiveblog.com/ioannidis-2005-published-research-findings-false/
Some qualification replicationindex.com/2019/01/15/ioannidis-2005-was-wrong-most-published-research-findings-are-not-false/
The problem with the title www.painscience.com/articles/ioannidis.php
- Paul Meehl 1967
- Ioannidis 2005
- Uri Simonsohn
- statmodeling.stat.columbia.edu/2016/05/06/needed-an-intellectual-history-of-research-criticism-in-psychology/
Social media for science critique🔗
When it comes to pointing out errors in published work, social media have been necessary. There has simply been no reasonable alternative. Yes, it's sometimes possible to publish peer-reviewed letters in journals criticizing published work, but it can take a huge amount of effort, and journals and authors often put up massive resistance to bury criticisms.
It's funny: when I was growing up, the term "peer review" made me imagine that journals quickly corrected or took down faulty studies, and that scientists were fiercely critical of badly done work. Apparently I was being optimistic. Until the reproducibility crisis came to light around 2010, publications were treated as permanently incumbent: you could write a letter of criticism, and presumably many naive optimists did, but you'd never get a response or see any change. A journal editor was recently quoted as saying "I'm not quite sure what a retraction is".
What do I like about blogs compared to journal articles? First, blog space is unlimited, journal space is limited, especially in high-profile high-publicity journals such as Science, Nature, and PPNAS. Second, in a blog it’s ok to express uncertainty, in journals there’s the norm of certainty. On my blog, I was able to openly discuss various ideas of age adjustment, whereas in their journal article, Case and Deaton had nothing to say but that their numbers “are not age-adjusted within the 10-y 45-54 age group.” That’s all! I don’t blame Case and Deaton for being so terse; they were following the requirements of the journal, which is to provide minimal explanation and minimal exploration. . . . over and over again, we’re seeing journal article, or journal-article-followed-by-press-interviews, as discouraging data exploration and discouraging the expression of uncertainty. . . . The norms of peer reviewed journals such as PPNAS encourage presenting work with a facade of certainty.
— statmodeling.stat.columbia.edu/2014/11/22/blogs-twitter/#comment-252332
Not being allowed to express uncertainty is just about the most unscientific thing there is!
Let me conclude with a key disagreement I have with Fiske. She prefers moderated forums where criticism is done in private. I prefer open discussion. Personally I am not a fan of Twitter, where the space limitation seems to encourage snappy, often adversarial exchanges. I like blogs, and blog comments, because we have enough space to fully explain ourselves and to give full references to what we are discussing.
Related
- Meta-science
- Naomi Oreskes
- Transparent language