Retraction Watch: Nerd tabloid or societal good?

We work hard for the privilege of being scientists. We study, we do research, we struggle and we fail. We spend time in dark rooms and cold rooms and mouse rooms and we work weekends and nights and holidays. And when we’re very lucky, we publish the product of all that effort.

But stuff happens, and sometimes those published scientific papers need to be retracted. Mistakes happen – someone misidentifies a piece of data, or includes the wrong image in a panel by accident. Sometimes a remarkable finding is later shown to be simply an artifact – a consequence of time or temperature that has been misinterpreted. And things that aren’t mistakes happen – a postdoc under pressure forges the result they think their superior wants to see, or a professor facing “publish or perish” fudges numbers to achieve statistical significance.

The dogma is (or was) that science is self-correcting: research that isn’t reproducible, regardless of the cause, won’t be the basis for future progress. Within that framework, formal retraction of a flawed work is helpful, surely – I won’t start a line of experiments based on retracted work – but if the premise is truly wrong, my preliminary experiments wouldn’t pan out anyway. However, events in recent years have poked holes in that dogma. We’ve learned that a surprisingly high percentage of published work is irreproducible, and that it can be extremely important for both the scientific community and the general public to be aware of formal retractions when they happen – a prime example being the retraction of the article that incorrectly suggested a link between vaccines and autism.

Enter Retraction Watch.

In 2010, Ivan Oransky and Adam Marcus began blogging at Retractionwatch.com, cataloging and commenting on all manner of scientific retractions. It’s now a popular and well-trafficked blog, with over 11,000 subscribers and funding from private foundations.

I started reading the site when I was working as an editor at a scientific journal; we occasionally had reason to investigate previously published papers, and I started to understand what a challenging task and heavy responsibility that could be. Retraction Watch made me realize that our experience was far from unique. However, as time went on (and I changed jobs) I started to find many of the posts disturbing and distasteful. The site maintains a “leaderboard,” listing the researchers with the most retractions to their names. Photos of senior authors are displayed beside descriptions of their retracted articles, and the underlying tone is transparently “SHAME on you!” or, occasionally, a gleeful “Tee hee, we caught ya!” Aspersions are cast not just on individual researchers but on entire fields of study (e.g., a post titled “Does Anesthesiology have a problem?”). In addition, Retraction Watch posts don’t readily allow readers to differentiate between the “Oops! The observations were artifacts” retractions and the “The patient data in the clinical study were totally fabricated” retractions; the format and tone are largely the same. Add to that the scathing comment strings below each post, and at times the site feels like a combination of some of the worst aspects of our age – instant gratification and public shaming.

But here’s the thing – I can’t stop reading it.

I admit that this is partly because Retraction Watch is the grocery store tabloid for the nerdy set. Will there be a post about that scientist I’ve always been suspicious of? Will my heroes get caught up in a scandal? Will that paper I presented in journal club get retracted? I’d hate to be the last to know.

More importantly, though, Retraction Watch does serve its stated purpose, bringing attention to issues that are extremely relevant to the scientific community. Last year, the site documented the hundreds of papers from multiple publishers that were retracted due to faked peer review, a phenomenon in which authors write positive reviews of their own papers using pseudonyms and false email addresses. The coverage was thoughtful and comprehensive, and hopefully will help make journal editors more aware of their role and responsibility in curbing this dangerous practice.

So I feel a little stuck. Though Boston drivers and Donald Trump keep trying to prove me wrong, I believe that most people have fundamentally good intentions. I think scientists become scientists because they want to understand how the world works, and maybe because they want to have some part in making it a little better. Retraction Watch irks me because it highlights and delights in (what I hope are) the minority of cases where that’s not true. There is value in this reporting, but I feel like the form and tone encourage a baseline distrust of my colleagues’ intent that goes beyond healthy skepticism of their science and their conclusions.

So for now? I’ll keep reading… but maybe with just one eye open.

 

Katy Claiborn is a research associate at the Harvard School of Public Health and previously served as the science editor at the Journal of Clinical Investigation.

One Reply to “Retraction Watch: Nerd tabloid or societal good?”

  1. Thanks for the critique; always appreciated.

    We would point out, however, that if it is sometimes difficult to differentiate retractions for fraud from those for honest error, that’s because journals’ retraction notices are frequently frustratingly opaque. Here’s why that’s a problem: http://retractionwatch.com/2012/10/01/majority-of-retractions-are-due-to-misconduct-study-confirms-opaque-notices-distort-the-scientific-record/ Such opacity is in fact one of the reasons why we launched Retraction Watch.

    And as to whether “the format and tone are largely the same” no matter what the reason for retraction, we would recommend reading our “doing the right thing” category, which now has more than 100 posts in it: http://retractionwatch.com/category/doing-the-right-thing/
    We explain more about the decision to create that here:
    http://www.labtimes.org/labtimes/ranking/dont/2013_06.lasso

    As for casting aspersions on entire fields, we’re always quick to point out — as the anesthesiology post you link to does — that such clusters can be statistical anomalies. We’re also quick to point out that retractions can be a sign of good news, namely that readers and editors are actually paying attention. More on this here, as one example: http://retractionwatch.com/2015/12/11/is-an-increase-in-retractions-good-news-maybe-suggests-new-study/ It’s the fields that ignore misconduct that we worry about.

    Thanks again for the critique.
