Peer Review

Typical representations of scientists show them as isolated, lone crusaders staring down microscopes or dripping one garishly coloured liquid from one test tube into another. In reality, most science is a team effort, and the hours researchers spend ‘doing’ science can be equalled by the time it takes to analyse the data collected and write the whole lot up as a coherent study. Once the work is handed over to a publisher, whether in the hope of sharing the knowledge they have accumulated, furthering a group or individual’s reputation, or just because they haven’t had anything out there for a while and the pressure’s on from above, the fate of all that hard work is in somebody else’s hands.

Sir Martin Evans by David Cobley, National Portrait Gallery, London

On submission to a scientific journal, a research paper first goes to the publication’s editorial team. They ensure it fits within the scope of their particular journal, that it’s interesting, and that they haven’t published one just like it an issue or two ago. If the team approve it, ‘the peers’ are next to get their paws on the research. Two or more external reviewers, each with a history of published studies of their own, make a detailed appraisal of the work. If they think the submission is any good, with the wind in the right direction and the moon in the most auspicious phase of its cycle, they will recommend it for publication. The ultimate decision on whether a study is published or perishes, however, lies with the journal’s editor alone.

When a science story appears in the press, it’s handy to know whether it has been published in a journal that operates a peer review system. NOT ALL DO! Even if the work some journalist has got so excited about he just had to spread the word has, on inspection, been peer reviewed, that doesn’t automatically mean it’s a good study or that its findings are valid. The key points peer reviewers look for are:

  1. Apparent validity, i.e. the results bear some relation to the study’s stated aims and claims. Do the results of the study justify its conclusions? Of course, to ensure all the findings are valid would mean repeating the study all over again, and that’s just not practical for reviewers to do;
  2. Significance: the work adds something new to, or improves the understanding of, a given subject;
  3. Quality: Is the methodology suitable to answer the questions posed? Are there suitable controls? Is the sample size adequate? Are the methods for analysing the data suitable? Have the data been interpreted correctly, or have they been over-interpreted? And a hundred and one other questions;
  4. Originality: Is this a new take on an old question? Is this a whole new question that no-one’s thought to ask before? On occasion, the quest for the novel seems to trump all other considerations, and poor-quality studies get published in otherwise good journals, often to generate publicity and thus garner new readers. When assessing a piece of work it is necessary to ask: is it original, or simply controversial?

It’s important to remember that not all journals are created equal. There are many reputable journals that operate robust peer review, but even they can fall foul of their stated aims and publish work for the sake of publicity. Perhaps the most famous example came in 1998, when the British journal The Lancet published a study by Andrew Wakefield that sparked the fear that the combined measles, mumps and rubella (MMR) vaccine could cause autism. In this episode of the Quackcast podcast, host Mark Crislip, with his trademark acerbic humour, bemoans such editorial lapses and mourns the “Good Journals Gone Bad”.

The public health crisis and the ongoing consequences of the decision to publish the now-retracted Wakefield paper, such as the 2011 measles epidemic, inspired the British government to examine the peer review process. The resulting publication, Peer Review in Scientific Publications (House of Commons Science & Technology Committee, July 2011), calls for bodies to be set up to review scientists’ work even before submission for publication. Many people, such as the journalist Brian Deer, who exposed the Wakefield fraud, think this is a good way forward. However, few scientists agree. It is difficult to see how such measures could ensure that a tenacious researcher bent on fraud would be caught, and some scientists regard the suggestion that fraud on the scale of Wakefield’s is endemic in science as a “seriously cheap shot” (Orac, Respectful Insolence).

