Should you blame your patient case-mix for poor performance in patient experience surveys?

[Figure: for a full explanation, please see our paper (open access).]

The story behind two recent CCHSR papers

There is an unprecedented amount of publicly reported data about what patients think of their care experience in different NHS hospitals. Yet information on hospital performance in patient experience often fails to influence the behaviour of the clinicians and managers responsible for improving care delivery.

This is a complex problem. One of its causes is that frontline staff often ‘explain away’ poor performance, assuming that variation in performance between hospitals chiefly reflects differences in the type of patients they serve. This ‘blame the case-mix’ assumption is widespread but not necessarily evidence-based.

Prompted by these observations, our own experiences, and previous CCHSR research, we examined the potential influence of patient case-mix in the context of the Cancer Patient Experience Survey.

In a recent article in Future Oncology, we found that, contrary to widely held assumptions, patient case-mix tends to have little influence on the patient experience performance category (‘top’, ‘middle’ or ‘bottom’) of NHS hospitals that treat cancer patients.

In a separate recent paper in BMJ Open we also explored some of the potential causes of the poorer experience reported by patients treated in London NHS hospitals. Again, we found that patient case-mix had only a small influence on London vs. rest-of-England differences in cancer patient experience. As teaching hospitals (of which there are many in London) tend to serve cancer patients with more complex clinical needs and care pathways, we also explored the potential role of teaching hospital status, but found that this factor had only a trivial impact after adjusting for patient case-mix.

Extrapolations should be avoided: the findings relate to the UK hospital setting and to cancer patient experience, and results might differ for other outcomes and other healthcare settings. Both papers relate to a programme of research sponsored by Macmillan Cancer Research.

Does this mean that case-mix adjustment of publicly reported NHS hospital performance would be a waste of time? Not necessarily. We argue that both unadjusted (crude) and adjusted scores are useful and should be jointly reported. This is for two reasons.

First, there may be a few hospitals (usually small hospitals that treat particular types of patients) where case-mix adjustment makes a large enough difference to be important.

Second, and more importantly, it is critical always to address assumptions with evidence, even if those assumptions are proven wrong most of the time. Failing to routinely report both crude and adjusted scores maintains artificial barriers that prevent clinicians and managers from engaging with patient experience data.
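To make the crude-versus-adjusted distinction concrete, the sketch below shows one simple way of computing both kinds of score and checking how often hospitals change performance band after adjustment. It is a minimal illustration only: the data are simulated, the covariates (age group, sex, cancer type) and the indirect-standardisation approach are hypothetical choices for this example, and the actual adjustment models used in the two papers are described in the publications themselves.

```python
# Minimal sketch: crude vs case-mix-adjusted hospital scores via indirect
# standardisation. All data, covariates and thresholds here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulate patient-level survey responses: 1 = positive rating of care.
n = 20_000
df = pd.DataFrame({
    "hospital": rng.integers(0, 50, n),     # 50 hypothetical trusts
    "age_group": rng.integers(0, 4, n),     # e.g. <55, 55-64, 65-74, 75+
    "female": rng.integers(0, 2, n),
    "cancer_type": rng.integers(0, 6, n),   # grouped tumour sites
})
# Responses depend mostly on case-mix here, plus a small hospital effect.
hosp_effect = rng.normal(0, 0.1, 50)[df["hospital"]]
logit = 1.0 - 0.3 * df["age_group"] + 0.2 * df["female"] + hosp_effect
df["positive"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Crude score: each hospital's raw proportion of positive responses.
crude = df.groupby("hospital")["positive"].mean()

# Expected score: predicted probability from a patient-level model using
# case-mix only (no hospital terms), averaged within each hospital.
X = (pd.get_dummies(df[["age_group", "cancer_type"]].astype("category"))
       .assign(female=df["female"]))
model = LogisticRegression(max_iter=1000).fit(X, df["positive"])
df["expected"] = model.predict_proba(X)[:, 1]
expected = df.groupby("hospital")["expected"].mean()

# Indirectly standardised (adjusted) score: observed minus expected,
# re-centred on the national average.
adjusted = df["positive"].mean() + (crude - expected)


def bands(scores):
    """Split hospital scores into tertile performance bands."""
    return pd.qcut(scores, 3, labels=["bottom", "middle", "top"])


# How many hospitals change performance band once case-mix is adjusted for?
moved = (bands(crude) != bands(adjusted)).sum()
print(f"{moved} of {crude.size} hospitals change performance band")
```

In this toy setup the underlying hospital effects are small relative to case-mix, so typically only a few hospitals switch band after adjustment; with real survey data the covariates, modelling choices and banding rules would of course differ.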

See also:

  • A collection of research papers on cancer patient experience (including several CCHSR papers) here.
  • The Lancet Oncology news item covering the publication of the two papers discussed in this blog post.
The Cambridge Centre for Health Services Research (CCHSR) is a thriving collaboration between the University of Cambridge and RAND Europe. We aim to inform health policy and practice by conducting research and evaluation studies of the organisation and delivery of healthcare, including safety, effectiveness, efficiency and patient experience.