Research reveals the prevalence of spin in academia

The science industry has had well-known issues with reproducibility of results in recent years, but a recent paper from the University of Sydney highlights the scale of the challenge. It reveals that over 25% of biomedical scientific papers mislead or distort their findings in some way.

The authors conducted a review of 35 previously published papers on the topic of ‘spin’ in biomedical research. Their meta-analysis revealed that over 25% of papers contained some kind of spin, with the figure rising to a depressing 84% in studies reporting on non-randomized trials.

Types of scientific spin

The researchers identified a wide range of scientific spin, with some of the more common methods including:

  • Making inappropriate claims about statistically non-significant results
  • Making inappropriate recommendations for clinical practice that were not supported by study results
  • Attributing causality when that was not possible
  • Selective reporting, such as emphasizing only statistically significant results or subsets of the data in the conclusions
  • Presenting data in a more favorable light than was warranted, for example by writing overly optimistic abstracts, misleadingly describing the study design, or under-reporting adverse events

Whilst spin appeared widespread, the paper was less conclusive as to the reasoning behind such distortions. It wasn’t clear that conflicts of interest were particularly to blame; instead, the distortions seemed to stem from local issues involving the researchers themselves, the journals publishing the papers, or the research methods used, rather than anything more systemic.

“The contribution of research incentives and reward structures – for example financial and reputational – that rely on ‘positive’ conclusions in order to publish and garner media attention is yet to be addressed,” the authors say.  “We see an urgent need for further research to determine the institutional or cultural factors that could contribute to such a high prevalence of spin in scientific literature – and to better understand the potential impact of spin on research, clinical practice and policy.”

The authors believe that the whole ecosystem needs to be more vigilant in the hunt for spin, whether editors, peer reviewers or the researchers themselves.

“The scientific academic community would benefit from the development of tools that help us effectively identify spin and ensure accurate and impartial portrayal and interpretation of results,” they conclude. “Publishing data alongside multiple interpretations of the data from multiple researchers is one way to be transparent about the occurrence of spin.”
