Are systematic reviews simply adding to research waste?

The problem

A systematic review, widely considered the ‘gold standard’ in evidence synthesis, uses transparent, pre-specified procedures to collate and summarise the best available current evidence, typically to answer a biomedical or health research question and thereby support effective and appropriate health care. Systematic reviews address the inadequacies of traditional narrative reviews: they are more rigorous, are guided by a peer-reviewed protocol, and are therefore easier to replicate. Yet Chalmers and Glasziou estimate that a staggering 85% of investment in health research is wasted, for reasons ranging from researchers asking the wrong question(s) to failing to publish their findings at all. Despite their wide adoption, systematic reviews can exacerbate this problem when they include underpowered trials.

The problem is further compounded by the limited evidence on how best to synthesise qualitative research. The synthesis methods used within systematic reviews were originally, and still mainly are, quantitative techniques such as meta-analysis. Methods for synthesising qualitative research remain under development and continue to emerge, and there is heated debate over whether qualitative research should be synthesised at all, given that it is not generalisable but specific to a particular context, time and group of participants.

Why is this of major concern?

A systematic review sets out to provide a complete, exhaustive summary of the current literature relevant to a research question, minimising error and bias, so that reliable findings can inform the correct action to improve health outcomes. Yet after reviewing 300 studies, Moher et al found that not all systematic reviews are equally reliable. If reviews are not conducted according to their pre-specified methods, how can we be certain that decisions based on their evidence make the best possible use of our scarce resources?

So, what is the solution?

On the synthesis of quantitative research in systematic reviews

Evidence from the literature has shown that meta-analyses of small trials are unreliable, often yielding inflated and spuriously significant treatment effects. Small trials are more susceptible to publication and other selection biases, as well as to misconduct, since single-centre trials are less closely monitored than multicentre ones. One way to reduce this effect is to exclude such trials, or to restrict attention to treatment effects plausible enough to be clinically worthwhile. In addition, rigorously following the established steps of a systematic review, and updating reviews as new evidence emerges, should help to minimise questionable findings.

On the synthesis of qualitative research in systematic reviews

Although the literature on this type of research is still growing and there is uncertainty over the range of methods available, Bearman and Dawson argue that researchers synthesising qualitative research should, at a minimum, report their stance (the justification for the chosen synthesis methodology) and their transparency (a clear description of the synthesis process, or a reference detailing the process used).

As the reader of a systematic review

Hemingway and Brereton state that caution must be exercised, not only by the researcher but also by the reader, before accepting the veracity of any systematic review. The reader should carefully assess the limitations of a review before deciding whether its recommendations should be applied in practice. Questions the reader should bear in mind include: Is the topic well defined? Was the search for papers thorough? Were the overall findings assessed for robustness against the selective inclusion or exclusion of doubtful studies and the possibility of publication bias?

The Bottom Line

Systematic reviews can provide an excellent summary of the existing knowledge on a particular research question. However, like any piece of research, a review may not be conducted properly, so care is needed both when conducting one and when appraising it. By following the recommendations outlined above, researchers and readers alike can help reduce the contribution of systematic reviews to research waste.
