
Wanted: Peer Review

Researchers need to publish their findings in scientific journals. But the quality of their submissions is highly variable, and professional peer review often fails. Editors are warning about dubious new journals that publish indiscriminately and make the situation even worse.


Medical and Social Science & Practice

The SBU newsletter presents and disseminates the results of SBU reports, describes ongoing projects at the agency, reports on assessment projects at sister organisations, and promotes interest in scientific assessments and critical reviews of methods in health care and social services.

Scientific journals are expected to check the quality of the articles they publish. A peer review process has long been a fundamental criterion for classification as a scientific journal. According to Jonas Ranstam, medical statistician and former adjunct professor at Lund University, it goes without saying that manuscripts must be closely scrutinised. Having reviewed thousands of articles, he is no stranger to major problems. Certain types are common even in randomised studies, which are sometimes regarded as beyond reproach. He has more than one story to tell.

“One typical mistake is failure to specify the primary endpoint under consideration. Researchers who are unable to verify the effect they were looking for may be tempted to focus on another endpoint instead. That is simply dishonest, misleading and totally unacceptable.”

Another problem he frequently encounters is studies with too few subjects.

“One reason may be that the researchers have overblown expectations for the efficacy of the intervention. Certain results may nevertheless be statistically significant due to pure coincidence.”
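To make the point concrete, here is a minimal simulation sketch (in Python; the sample size, effect size and number of trials are all invented for illustration, and none of it comes from Dr Ranstam's examples). With 20 subjects per arm and a modest true effect, most trials fail to reach significance, while trials of a treatment with no effect at all still come out “significant” about 5% of the time:

```python
# Illustrative simulation, not from the article; all numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_arm = 20       # a deliberately small trial
n_trials = 10_000    # number of simulated trials

def fraction_significant(true_effect):
    """Simulate many two-arm trials; return the share with p < 0.05."""
    hits = 0
    for _ in range(n_trials):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(true_effect, 1.0, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        hits += p < 0.05
    return hits / n_trials

# A modest real effect (0.3 SD) is detected in only ~15% of such trials...
print(f"Power with n=20 per arm, effect 0.3 SD: {fraction_significant(0.3):.0%}")
# ...while a non-existent effect still looks 'significant' ~5% of the time.
print(f"False-positive rate when the true effect is zero: {fraction_significant(0.0):.0%}")
```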

Sometimes the article fails to specify the minimum size that an effect must attain to be clinically relevant.

“Failure to define a minimal important difference can create difficulties in large register studies as well. Small, inconsequential differences may show up as statistically significant.”
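The following sketch illustrates that pitfall (Python; the group sizes, measurement scale and the minimal important difference of 5 points are assumptions made for the example). In register-sized samples, a true difference of 0.2 points, far below anything clinically meaningful, is still highly statistically significant:

```python
# Illustrative sketch with invented numbers: a clinically trivial
# difference becomes highly "significant" in a register-sized sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200_000                 # subjects per group, register-study scale
mid = 5.0                   # assumed minimal important difference (points)
tiny_diff = 0.2             # true difference, far below the MID

group_a = rng.normal(50.0, 10.0, n)
group_b = rng.normal(50.0 + tiny_diff, 10.0, n)
_, p = stats.ttest_ind(group_b, group_a)

print(f"Observed difference: {group_b.mean() - group_a.mean():.2f} points (MID = {mid})")
print(f"p-value: {p:.1e}  -> significant, yet clinically inconsequential")
```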

A similar problem is the quest for statistically significant differences, often without an underlying hypothesis.

“That can turn into a scientific wild goose chase,” Dr Ranstam says. “You end up in a multiplicity trap. The more effects you measure, the greater the risk that statistically significant differences will arise by happenstance.”
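The arithmetic behind the multiplicity trap is simple: with k independent endpoints each tested at the 5% level and no true effects anywhere, the chance of at least one false positive is 1 - 0.95^k, roughly 64% for 20 endpoints. The sketch below (Python, with invented sample sizes) simulates exactly that:

```python
# Illustrative sketch of the multiplicity trap: 20 endpoints, no true
# effects anywhere, yet most studies find something "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_arm, n_endpoints, n_studies = 50, 20, 5_000

studies_with_false_alarm = 0
for _ in range(n_studies):
    # Both arms are drawn from the same distribution: every endpoint is null.
    control = rng.normal(0.0, 1.0, (n_endpoints, n_per_arm))
    treated = rng.normal(0.0, 1.0, (n_endpoints, n_per_arm))
    _, pvals = stats.ttest_ind(treated, control, axis=1)
    studies_with_false_alarm += (pvals < 0.05).any()

print(f"Studies with at least one 'significant' endpoint: "
      f"{studies_with_false_alarm / n_studies:.0%}")
print(f"Theoretical value for 20 independent tests: {1 - 0.95**20:.0%}")
```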

Randomised trials should also be double-blinded to avoid some common biases.

“Otherwise the results may be skewed because the measurements and interventions have been affected. Occasionally there is no way of preventing the practitioners and subjects from finding out. In such cases, at least the experts who assess the results must remain blinded.”

Even when journals engage experienced reviewers, the system is anything but foolproof. Evaluations by SBU and others show that peer reviewers accept many studies that appear sound at first glance but turn out to be unreliable on closer reading.

While a significant percentage of submissions are rejected, particularly by reputable and oft-cited journals, many flawed articles still see the light of day. Substantial time and money are devoted to studies that are so small, short-term and poorly designed that they cannot even answer the questions posed by the researchers themselves. Meanwhile, studies that could shed light on key clinical issues are conspicuous by their absence.

Additional challenges have emerged over the past few years. Online-only scientific journals have spread like wildfire. Their names resemble those of well-known journals, and they invariably claim to be peer-reviewed. Many maintain high standards, but some unscrupulous ones will publish virtually anything as long as they get paid. The number of online predatory journals was estimated at 8,000 in 2015. The World Association of Medical Editors (WAME) recently highlighted the problem.

The ability of predatory journals to attract researchers is no doubt related to the “publish or perish” syndrome. The opportunity to disseminate findings quickly and evade the stern eye of an editor may be particularly appealing to inexperienced scientists. The dilemma deepens if universities and funding sources attach more significance to the number of articles published than to the quality of the journals in which they appear.

In spring 2017, Nature sent an application from an imaginary researcher for an editorial position to 360 scientific periodicals, one-third (120) of which were taken from a list of suspected predatory journals. The CV was designed so that the applicant was clearly unqualified. Letters of acceptance started pouring in within a few hours; ultimately, 40 of the journals presumed to be predatory accepted. Eight online publications that charged fees but had been regarded as reputable also fell into the trap, whereas all the established journals steered clear of it.

Even though predatory journals are at the far end of the spectrum, inadequate review processes are not at all unusual. In 2013, Science sent a fabricated biomedical research article with unmistakable flaws to 304 online scientific journals. More than half of them, all of which claimed to have peer review procedures, accepted the article.

Journals began using peer review in the eighteenth century, but it was not until some 200 years later that it became the rule rather than the exception. Owing to its undeniable defects, the system has been highly controversial.

It has become increasingly evident that readers must be able to think critically to assess the reliability and relevance of research findings. [RL]

Some fundamental questions about studies of efficacy

Representativeness

Have the subjects been correctly selected? Do they typify the group to which they belong? Are the subjects essentially similar to the larger population for whom the findings will be used?

Research methodology

Have the subjects been assigned to either a group that receives the intervention or one that does not? Have they been randomised to the two groups? Were the two groups essentially the same at baseline? If not, have the researchers tried to compensate for the differences in a correct manner?

Findings

How intensive and long was the intervention? How large was its effect? Compared with what (a realistic alternative)? How precise was the estimate of the effect?

Protection against biased results

Did the study follow up on all subjects included at baseline? Did all subjects remain in the same study group? Were subjects, investigators and researchers blinded to who received the intervention? Apart from the intervention, were the two groups handled the same in all respects? Is a systematic review of the results of similar studies available? Have other studies and research teams demonstrated similar results?

Relevance

Are the findings applicable to this context? Does the study present effects that are useful for patients and clients or are they only surrogate measures? What has been shown by systematic reviews of adverse effects? Is the probable benefit of the intervention greater than the potential harm? Do the results justify all the sacrifices that the intervention requires?

Suggested reading

  • Munafò MR, et al. A manifesto for reproducible science. Nature Human Behaviour 2017;1: article no. 0021.
  • Laine C, et al. Identifying Predatory or Pseudo-Journals. World Association of Medical Editors (WAME). Published 18 Feb 2017, www.wame.org
  • Shen C, et al. ‘Predatory’ open access: a longitudinal study... BMC Med 2015;13:230.
  • Sorokowski P, et al. Predatory journals recruit fake editor. Nature 2017;543:481-3.
  • Clark J, et al. Firm action needed on predatory journals. BMJ 2015;350:h210.
  • Bohannon J. Who’s afraid of peer review? Science 2013;342:60–65.