Anyone who believes that peer-reviewed journals only ever publish rigorously-analysed research with sound, objective results would surely have been disabused of that notion by now following a recent sting exposing the flaws in the academic publishing system.

Harvard biologist Dr John Bohannon, working with Science, carried out an undercover operation that exposed the lack of decent peer review among lower-tier open-access scientific journals: he submitted a paper containing a number of deliberate flaws, attributed to a non-existent author at a made-up institution.

Worryingly, of the 304 journals describing themselves as peer-reviewed that he sent it to, 157 accepted it for publication.

Following Dr Bohannon’s investigation, this week’s Economist has an excellent in-depth piece on how “too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis”.

The piece goes on to say that many results published in peer-reviewed journals cannot be replicated, calling the veracity of the research into question: has the data been cherry-picked to produce that result? Was the methodology poor? Or is the researcher deliberately and fraudulently concocting the study’s outcome, in a bid to boost their standing in the scientific community by getting more papers published?

Part of the problem is a lack of rigour among the reviewers who are meant to check that a study is sound and up to snuff before it is published, and a lack of appetite among journals for replication studies that would confirm results are reliable.

As the Economist piece says,

“Journals, thirsty for novelty, show little interest in it; though minimum-threshold journals could change this, they have yet to do so in a big way.

“Most academic researchers would rather spend time on work that is more likely to enhance their careers. This is especially true of junior researchers, who are aware that overzealous replication can be seen as an implicit challenge to authority.

“Often, only people with an axe to grind pursue replications with vigour—a state of affairs which makes people wary of having their work replicated.”

As a journalist, whether or not a study has appeared in a peer-reviewed journal is, for me at least, crucial in deciding whether to publish it. The more prestigious the journal, the greater the weight the study naturally carries; publication acts as a ‘seal of approval’, letting journalists know whether it merits appearing in their own publication.

But knowing that a worryingly high number of journals let so much poor science slip through the net, into their pages and subsequently into the wider media, is problematic.

When I read a science story, I always check to see which journal the study has been published in, and feel reassured that the science is solid if it has appeared in a well-regarded one. But if even that can’t be relied upon, then even more ropey science is appearing in the media than I previously thought. And the wider public is absorbing this false information, and possibly making lifestyle choices based on it.

As the Economist piece points out, seemingly very few scientists are holding their hands up and making retractions when the errors are pointed out:

“Papers with fundamental flaws often live on. Some may develop a bad reputation among those in the know, who will warn colleagues. But to outsiders they will appear part of the scientific canon.”

This not only damages the reputation of science, and the public’s trust in it, but stymies its progress – everyone loses out.