Peer reviewing: should it detect fraud?

What is the future of peer review, and should it detect fraud and misconduct? These were just a few of the questions tackled at a talk at the British Science Festival on Tuesday. At the session Science Fact or Science Fiction: Should Peer Review Stop Plagiarism, Bias or Fraud?, Sense About Science’s Tracey Brown revealed for the first time the preliminary results of their 2009 Peer Review Study. On hand to offer their opinions on the findings were panellists Peter Hayward from the journal Lancet Infectious Diseases and The Guardian’s environment correspondent David Adam.

Video: Tracey Brown, Peter Hayward and David Adam offer up their views on the role of peer review

More than 4,000 researchers took part in the online survey from July to August, with a full report of the findings due to be published this November. Peer review now underpins the publication of 1.3 million learned articles every year. According to Sense About Science, peer review is the front line in critical review of research, enabling other researchers to analyse or use findings, and, in turn, society at large to sift research claims.

Brown likened peer review to the jury trial system in responding to fears voiced by some critics over whether peer review is effective in detecting fraudulent research claims.

“In practice, there are miscarriages of justice and there are delays in the jury system,” she said. “But we have to be careful that we don’t jettison trial by jury just because these things sometimes happen. Likewise with peer reviewing, fraud may slip through the net from time to time, but that’s no reason to do away with the whole system. In that sense, I don’t feel that peer review is in crisis.”

According to the findings, more researchers want to improve, rather than replace, peer review. Around 84 per cent believe that without peer review, there would be no control in scientific communication, with 68 per cent agreeing that formal training for peer reviewers would improve the quality of reviews.

“It’s surprising that many people thought there should be a high degree of training,” said Brown. “It seems that people don’t necessarily know how to pitch a review. They can be too nice sometimes and don’t realise it would be more helpful to the editor if they gave an honest review. So what’s the best way to help reviewers improve? Maybe the answer is for lecturers to teach their postdoctoral students how to peer review.”

Brown also touched on anonymity in peer reviewing, and how it can be crucial in encouraging researchers to review. Over three quarters favoured the double-blind system, in which authors and reviewers remain anonymous to each other and only the editor knows both identities.

“If you force someone to sign their review, you would get a massive drop off, particularly with younger researchers reviewing older colleagues. There’s obviously a discomfort there.”

So should peer review actively detect fraud? The survey revealed that 79 per cent of authors and reviewers say yes, but only 33 per cent think it is actually capable of doing so.

“What I find interesting is that there is a huge gap between people who think peer review should detect plagiarism and fraud, and those who think it actually does. People do feel there needs to be something to check fraud, but that peer review is not the way to do it. On the one hand, it seems to run counter to the idea of a community of collaboration to have an editor tell the peer reviewer to be very suspicious of any findings. But overall, around three quarters of people think peer review is doing what it sets out to do.”

Hayward, who is tasked with organising peer reviews at the Lancet, underlined the importance of reviewing to journal editors, saying: “The comments we get back from peer reviewers tell us so much about the paper in question. They tell us whether a topic is current and so on. What peer reviewers do for me as an editor is they look at the papers and then offer suggestions on how to improve it, which in turn also benefits the author.

“Obviously we face a challenge in that we don’t always know who are the best reviewers to approach. Bias is a problem. You don’t always know the relationships between peer reviewers and the author of the paper. Sometimes you only find out once the peer review has come back.

“Time is also an element in peer review. People want to get their papers published as soon as possible. For example, in the summer, many people are away on holiday and it’s hard to find people to peer review papers.”

Adam commented on how journalists are often in the firing line when an article featuring a fraudulent claim is published.

“It’s very convenient just to say, ‘Well if it was published in Nature, if it was published in Science, it’s been peer reviewed, I’ll just use it, and if it’s wrong, you can just blame the journal,’” he said. “I think in a sense it’s a blessing and a curse for the journalist, because you assume it’s been scrutinised by the journal, yet you will also be held accountable if it’s wrong.

“There’s also commercial pressure to cover big stories which others are reporting on, even if you’re sceptical about them. So maybe PA will receive a press release and publish it, and then it’s seen by the editor and they’ll ask you to cover it too. As a journalist, you wouldn’t last very long if you replied, ‘I won’t report on that’ all the time.”

More detailed coverage of the preliminary study can be found on the Sense About Science website, where the results can also be downloaded.
Video filmed by me at the event.

© Melanie Hall 2017