The problem with peer review
22 Mar 2013 by Evoluted New Media
Criticised by some scientists as ineffective and slow, is the supposed ‘gold standard’ of science publishing diminishing in value?
Yoshitaka Fujii currently holds the record for the most fraudulent articles in the scientific community. The Japanese anaesthesiologist, who faked 172 papers, was dismissed by Toho University because several of his clinical studies lacked proper ethical approval. It later emerged that there was no evidence Fujii had collected any of the data published in his later-retracted articles. Like many cases of fraud, this scenario raises questions about how the misconduct went undetected for so long. In a world of peer review and hard data, how are fraudulent cases of such epic proportions possible?
And potential fraud is not the only concern. Peer review is supposedly a central pillar of modern science, but a growing number of scientists are speaking out about what they regard as unacceptable flaws in the system. “The truth is that peer review as practiced in the 21st century poisons science. It is conservative, cumbersome, capricious and intrusive. It slows down the communication of new ideas and discoveries, while failing to accomplish most of what it purports to do,” writes Michael Eisen, a biologist at UC Berkeley, in his blog ‘it is NOT Junk’. He is not alone in his frustrations, and this has led some scientists to explore a different approach to the supposed pinnacle of scientific integrity.
As a researcher, you are unlikely to be taken seriously by members of the scientific community unless your work has appeared in a high-impact peer-reviewed journal, the prime source of information on novel scientific advances. Considered an essential component of the publishing process, peer review is universally applied prior to a journal’s acceptance of a paper: a quality-control system that requires all new scientific ideas to be scrutinised by experts in the field before they can be accepted.
Peer review brings value to various groups of people. For the authors themselves, the system provides a kite mark of credibility on their research and may lead to further funding and academic jobs. “Publishing in high-impact peer-reviewed journals helps researchers’ careers as it puts a ‘quality stamp’ on their research and helps them get visibility and recognition,” Peter Harrison, Vice President Health and Medical Sciences at Elsevier told Laboratory News.
Additionally the system provides scientists with detailed advice about how they could improve their paper.
“It’s about more than career advancement. The essence of peer review is that it improves and advances the scientific field in question by experts constructively feeding back on how the research might be enhanced or improved as well as endorsing it,” states Harrison.
Charles Jennings, a former editor at Nature, echoes Harrison’s statement: “At its best, the peer review system provides not only expert advice, but also a strong incentive for authors to heed the advice and to improve the paper.”1
For the rest of the scientific community, peer review provides a mechanism to prioritise which papers will be read: scientists can, if they wish, focus on top-tier journals and assume they are reading the most influential papers of the highest quality. These papers influence their own work and methodology. For the public, peer review makes it easier to judge which scientific claims in the media to trust.
But as a quality-control system, peer review needs to be robust. To determine how accurate referees were at spotting scientific errors in articles they reviewed, British Medical Journal editor Fiona Godlee and colleagues took a paper about to be published in the journal and introduced nine major and five minor deliberate methodological errors. The doctored paper was then sent to 420 peer reviewers2. The team discovered that the median number of errors detected by respondents was a mere two. No one managed to spot more than five of the deliberate errors, and 16% of respondents couldn’t find any mistakes at all. Bad news for peer review: this trial suggested that the process doesn’t really increase the quality of published research. Even the authors concluded: “The study paints a rather bleak picture of the effectiveness of peer review.”
And as the tale of Yoshitaka Fujii shows, peer review doesn’t work too well as a fraud-detection system either. Scientific peer review was brought sharply to the public’s attention in 2005 with another notable fraud case, when it emerged that South Korean former professor Woo Suk Hwang had used fabricated data in his stem cell research papers. Hailed as a national hero at the time of publication (the Korean government even put his face on a stamp), his research team claimed to have successfully cloned a human embryo and produced stem cells from it3, research that promised to revolutionise health care and provide cures for a range of diseases. Before this revelation, Hwang had also claimed to have cloned a cow and a pig, and introduced Snuppy4, an Afghan hound puppy which his team said was the world’s first cloned dog (interestingly, this claim turned out to be true). As allegations of bad practice began to emerge, he was forced to admit that female researchers in his laboratory had supplied the eggs for his experiments. It was after this that two of his stem cell papers (both published in the highly respected Science) were found to have been faked. After an exhausting three-year trial, Hwang was convicted of fraud, embezzlement and breach of bioethics law.
Smaller but similar scandals have also served to highlight the ineffectiveness of peer review when scientists fake results, and to shine a light on some deep cultural problems rooted in academia. The ‘publish or perish’ mentality creates intense pressure to churn out papers as frequently as possible in order to hold on to research funding and, ultimately, one’s career. It is perhaps unsurprising that this leads a small minority of researchers to fabricate their findings, or to fail to replicate studies whose positive findings may have arisen purely by chance.
Something that even the most fervent supporters of peer review will agree on is that the process takes a long time. While methodical attention to detail is of great importance, some would argue that the process slows down advances in scientific and medical knowledge and that in this day and age of instant access to information, peer review seems dated.
“I was frustrated by the long delays at high-impact journals. It seemed to me that the ‘publication game’ really distracted from the science, rather than helping with the science,” Nikolaus Kriegeskorte, a researcher at the MRC in Cambridge, told Laboratory News.
As the scientific enterprise is built on the results of others, if those results are languishing in reviewers’ hands or in another time-consuming round of re-submission, scientific discovery is undoubtedly slowed.
Elsevier responded to this criticism stating they were “investing time and resources into quality enhancing initiatives such as increased support and enhancement of the process to speed up review times.” These peer-review-enhancing projects, which include an article transfer service, more choice for reviewers over the articles they review, and a review guidance program, are detailed on Elsevier Connect.5
Kriegeskorte also worries about bias in the process: “It’s when reviewers gauge their enthusiasm for high-impact publications that things necessarily get very subjective. The traditional publication system restricts the evaluation of papers by relying on just two to four pre-publication peer reviews and by keeping the reviews secret. As a result, the current system suffers from a lack of quality and transparency of the peer review process, and the only immediately available indication of a new paper’s quality is the prestige of the journal it appeared in,” he said.
So what can we do to overcome the limitations of the current system? Kriegeskorte thinks he’s come up with a viable solution. His frustration with peer review led him to start a blog6 in order to develop and share a vision for the future. Blogging among academics is becoming ever more popular, with more and more researchers turning to social media to comment on papers and, in some cases, tear them apart. A good example is the immediate criticism of the widely publicised paper that heralded a bacterium which, the authors claimed, used arsenic rather than phosphorus in its DNA7. Rosie Redfield, a microbiologist at the University of British Columbia (who tried and failed to replicate the study), and other bloggers wrote about significant flaws they’d found in the study, and it was later debunked. A website called ResearchBlogging now collects post-publication blog posts like these about the scientific literature.
Kriegeskorte believes we could take inspiration from these structures for a new approach to peer review. He says, “Designing the future system, by which we evaluate papers and decide which ones deserve broad attention and deep reading, is a great challenge of our time.”
Last year, Kriegeskorte edited an eBook entitled “Beyond open access: visions for open evaluation of scientific papers by post-publication peer review” published in Frontiers in Computational Neuroscience.8 The eBook compiles 18 peer-reviewed articles that lay out detailed visions on how a transparent, open evaluation (OE) system could work for the benefit of the scientific community and the public.
“The evaluation process will be totally transparent with peer reviews and ratings publicly posted, post-publication,” he explained.
Many would agree with this premise: “It makes much more sense in fact to publish everything and filter after the fact,” said Cameron Neylon, a senior scientist at the STFC.
Under such a system, papers would be published instantly on an open forum platform and then publicly evaluated by the community. Kriegeskorte suggests a simple, two-step process based on a fundamental division of powers. The first step would happen after a manuscript is published online and would allow anyone who has read the paper to publicly post a review of it or give it a rating. In the second step, independent web portals to the literature would combine all the evaluations to give a prioritised perspective on the scientific literature.
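The division of powers in this two-step process can be sketched in code. The following is a minimal, purely illustrative model (the class and function names are invented for this sketch, not taken from any real OE platform): step one attaches signed reviews and ratings to published papers; step two lets any independent portal rank the open literature with its own evaluation formula.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Review:
    reviewer: str   # reviews are signed, not anonymous
    rating: float   # e.g. an overall quality score from 0 to 10
    text: str

@dataclass
class Paper:
    title: str
    reviews: list = field(default_factory=list)

    def post_review(self, review: Review) -> None:
        """Step 1: anyone who has read the paper may publicly
        attach a review and rating after publication."""
        self.reviews.append(review)

def prioritise(papers: list) -> list:
    """Step 2: an independent portal combines the public
    evaluations into a prioritised view of the literature.
    Here the formula is simply the mean rating; each portal
    could substitute its own."""
    def score(paper: Paper) -> float:
        if not paper.reviews:
            return 0.0
        return mean(r.rating for r in paper.reviews)
    return sorted(papers, key=score, reverse=True)
```

The key design point mirrors Kriegeskorte’s proposal: the platform that hosts reviews (step one) is separate from the portals that weigh them (step two), so no single gatekeeper decides which papers deserve attention.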
Kriegeskorte believes that the new system would allow important papers to be evaluated more deeply within the field and more broadly across other fields, ultimately resulting, in his opinion, in more reliable science. He also argues that removing anonymity should make reviews less biased: “Writing a high-quality review will boost the reviewer’s reputation – it’s a mini-publication. This will motivate scientists to participate and be objective. Reviews, like papers, are published for eternity. Reviewers will want to stand on the right side in historical retrospect.”
An admirable idea, but will it work in practice? Some studies of systems similar to OE have been undertaken, with less than positive findings. In 2006 Nature trialled such a system, publishing submitted papers online and allowing users to comment. They discovered that “although most authors found at least some value in the comments they received, they were small in number, and editors did not think they contributed significantly to their decisions.”
“Most papers sit in a wasteland of silence, attracting no comments whatsoever,” agrees Phil Davies, editor of The Scholarly Kitchen, a blog run by the Society for Scholarly Publishing.9
Nature’s findings suggest that OE might be further than even the most social-media-savvy scientists are willing to go. It is clear that any such system must be well thought out, provide incentives for its users and ensure all discussion is properly moderated.
Harrison says: “It’s doubtful that such a system could replace peer review, but in some areas, where speed is of the utmost essence, it could prove useful.”
Elsevier are experimenting with a similar approach in one of their journals, Physics of Life Reviews, allowing researchers to submit one-page comments on a review article, which are then published alongside it. The author, if he or she wishes, can write a rebuttal article.
“Since the pilot was launched in January 2010, there has been a sharp increase in usage: about 3000 downloads per month compared to 2000 per month in 2009,” said Harrison.
Peer review may still represent the gold standard of scientific publishing, but it is slow, biased and occasionally ineffective. It is becoming clear that a new approach may be necessary to overcome the limitations of the current gatekeeper of scientific success. Furthermore, in an age of instant information, sharing papers with the scientific community in real time and receiving their opinions may accelerate the rate of scientific discovery, which should surely be the ultimate aim of the process. However, whether an Open Evaluation system is the best solution remains to be seen. Nikolaus Kriegeskorte acknowledges that we cannot expect such a paradigm to be accepted overnight: “Building a functioning OE system is like building an airplane. We shouldn’t expect it to be easy. But we also shouldn’t be deterred by early failures and by those who think they know that flight is impossible. It’s much more than a software engineering challenge. It’s a deep change of scientific culture that we are trying to bring about.”
Tell us what you think! Please remember this is your magazine – if you would like to comment on this topic or share your own experiences of the peer review process, please get in touch online in the comments section or tweet us at @laboratorynews
References:
1. Jennings, C. 2006. Quality and value: the true purpose of peer review. Nature. doi:10.1038/nature05032
2. Schroter, S. et al. 2008. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med 101(10): 507–514
3. Hwang, W. S. et al. 2005. Patient-specific embryonic stem cells derived from human SCNT blastocysts. Science 308(5729): 1777–83
4. Lee, B. C. et al. 2005. Dogs cloned from adult somatic cells. Nature 436: 641
5. http://elsevierconnect.com/peer-review-new-improvements-to-age-old-system/
6. http://futureofscipub.wordpress.com
7. Wolfe-Simon, F. et al. 2010. A bacterium that can grow by using arsenic instead of phosphorus. Science 332(6034): 1163–1166
8. Kriegeskorte, N. et al. 2012. An emerging consensus for open evaluation: 18 visions for the future of scientific publishing. Frontiers in Computational Neuroscience
9. http://scholarlykitchen.sspnet.org/
10. Peer Review Survey 2009, Sense About Science. http://www.senseaboutscience.org/data/files/Peer_Review/Peer_Review_Survey_Final_3.pdf