Reducing Waste in Research

In 2009, Chalmers and Glasziou published an article estimating that as much as 85% of investment in research is avoidably wasted. They identified various points in the research process at which the waste occurs, including failures to ask the ‘right’ questions, use of inappropriate study designs and methods, selective publication of results, and inadequate reporting of the studies that are published. In an attempt to address this huge problem, the Lancet recently published a series of articles reviewing techniques to reduce this wasted effort.

All of the articles in the series address the very important issue of technical efficiency: how best to achieve a given objective at minimum input or cost or, equivalently, how to maximise the output (in this case the quality of research) for a given input.

So how can we measure the impact of increasing the technical efficiency of research? Well, first we need to decide what it is we want to get out of clinical research. A reasonable answer is to maximise the health gain of the population, conveniently coinciding with the assumed objective of the health system itself. Of course this is a gross simplification: we are also interested in who receives any health gain (equity), and there are spill-over effects (‘externalities’) from having an active research sector, such as the maintenance of a highly skilled workforce, and prestige. But let’s set these to one side for the moment and assume our maximand is health.

So, by improving the technical efficiency of a particular research project, e.g. by improving the quality of the research, we increase the chance that it yields the ‘right’ answer (i.e. we reduce bias) and that it is published in a high-quality journal, attracting attention that leads to a change in clinical practice and ultimately improves the health of the population.

A concept the series does not address is what is known as ‘allocative efficiency’: how to choose the set of (technically efficient) research projects that maximises the welfare of society as a whole. Whilst Chalmers and colleagues do consider approaches to ensuring the right questions are asked, their focus is on how to generate the questions of importance to patients in the first place. Where more questions are asked than research resources allow us to answer, it is important to decide the order in which they should be tackled.

Well, it turns out that there are methods for predicting the expected return on investment in a particular project in terms of health gain. We can compare this with the health gain forgone elsewhere in the health system (represented by the cost of the research, because we are diverting resources into research that could otherwise have been spent directly on delivering health care), and use that information to choose the set of projects that maximises the expected net health gain to patients.
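
As a concrete illustration of that comparison, here is a minimal sketch; the trial cost and the £20,000-per-QALY figure below are assumptions invented for illustration, not estimates from the series:

```python
# Illustrative only: converting the cost of research into health forgone
# elsewhere. Both numbers are assumptions, not estimates from the post.
trial_cost = 1_000_000  # £ needed to run the proposed trial
threshold = 20_000      # £ the health system spends to generate one QALY

forgone_qalys = trial_cost / threshold  # 50 QALYs of care displaced
# In expectation, the trial is worth funding only if its expected
# health gain exceeds these 50 forgone QALYs.
```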

This approach is called Value of Information analysis and is what I hint at in a letter published in The Lancet this week. It works like this:

  1. We begin with some prior estimate of the cost-effectiveness of drug A compared with drug B. This may not be particularly informative if current information is very limited: it should be based on a systematic review of current evidence, supplemented by expert opinion where harder evidence is lacking. The important thing is that we have some plausible estimate of the uncertainty in the treatment effect.
  2. The decision to adopt a new treatment or not is made on the basis of cost-effectiveness. However, the uncertainty means that we may make the wrong decision: based on current information, we may conclude that drug A is preferable to drug B. If it turns out that drug B is actually the cost-effective option, we have incurred a net loss: either the patients in front of us would have done better on B, or else the opportunity cost to other patients in the system is greater than the benefit to those in front of us.
  3. The probability of being ‘wrong’ multiplied by the consequence of being wrong (the loss) is the expected loss associated with uncertainty.
  4. Given some assumptions about the nature of our data and the prior information, we can predict what the result of a new clinical trial of a certain sample size will be. Critically, we can also predict the reduction in decision uncertainty when we combine the trial data with the prior information; this reduction is a function of the study’s sample size: a bigger study gives us more information than a smaller one.
  5. Reducing the decision uncertainty reduces the probability of making the wrong decision, and so reduces the expected loss.
  6. The expected reduction in expected loss is the expected gain from the proposed trial (see the sketch after this list).

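To make those steps concrete, here is a minimal Monte Carlo sketch, not a definitive implementation. Everything in it is an assumption for illustration: a normal prior on the incremental net health benefit of drug A over drug B, normally distributed trial outcomes, and invented values for the prior mean and standard deviation, the outcome standard deviation and the sample size.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Step 1: a prior on the incremental net health benefit (INHB, QALYs per
# patient) of drug A over drug B. Mean and sd are invented for illustration.
m0, s0 = 0.5, 1.0
inhb = rng.normal(m0, s0, n_sims)

# Step 2: on current information we adopt drug A if its expected INHB is
# positive (drug B's relative net benefit is zero by construction).
best = np.maximum(inhb, 0.0)        # net benefit of the per-draw best choice
chosen = inhb if m0 > 0 else 0.0    # net benefit of the choice we actually make

# Step 3: expected loss = P(wrong) x consequence of being wrong, computed
# here by averaging the per-draw loss. A perfect study would eliminate this
# loss, so it equals the expected value of perfect information (EVPI).
evpi = (best - chosen).mean()

# Step 4: predict a trial with n patients per arm and outcome sd sigma
# (both assumed). With a normal prior and normal data, the posterior mean
# we will hold after the trial is itself normally distributed now.
sigma, n = 4.0, 100
post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
m_post = rng.normal(m0, np.sqrt(s0**2 - post_var), n_sims)

# Steps 5-6: after the trial we adopt whichever drug the posterior mean
# favours; the expected reduction in expected loss is the expected value
# of sample information (EVSI) for this trial design.
evsi = np.maximum(m_post, 0.0).mean() - max(m0, 0.0)

print(f"P(adopting A is wrong) = {(inhb < 0).mean():.2f}")
print(f"EVPI per patient       = {evpi:.3f} QALYs")
print(f"EVSI per patient       = {evsi:.3f} QALYs (n={n} per arm)")
```

Increasing n shrinks the posterior variance, so the EVSI rises towards the EVPI, which is exactly the ‘bigger study gives us more information’ point in step 4. Scaling the per-patient EVSI up to the population whose treatment the decision affects gives the expected gain that is compared against the trial’s cost below.
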
I expect that all those expectations may lead to some confusion, but hopefully the logic makes sense. Comparing the expected gain with the expected cost of a number of different trials lets us calculate the (expected) return on investment from each. Ordering by the expected return provides a ranking by health gain per pound spent, and funding projects in that order will maximise the expected health gain from the research budget.
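
As a sketch of that ranking step (the trials, their expected gains and costs, and the £20,000-per-QALY threshold below are all invented for illustration):

```python
# Hypothetical portfolio: 'gain_qalys' stands for each trial's expected
# population health gain (e.g. population EVSI); all numbers are invented.
trials = [
    {"name": "Trial 1", "gain_qalys": 900, "cost_gbp": 2_000_000},
    {"name": "Trial 2", "gain_qalys": 450, "cost_gbp": 500_000},
    {"name": "Trial 3", "gain_qalys": 120, "cost_gbp": 400_000},
]
threshold = 20_000  # assumed £ per QALY forgone elsewhere in the system

for t in trials:
    t["net_gain"] = t["gain_qalys"] - t["cost_gbp"] / threshold
    t["gain_per_pound"] = t["gain_qalys"] / t["cost_gbp"]

# Rank by expected health gain per pound and fund down the list until the
# research budget runs out (or the expected net gain turns negative).
for t in sorted(trials, key=lambda t: t["gain_per_pound"], reverse=True):
    print(f"{t['name']}: {t['gain_per_pound'] * 1e6:.0f} QALYs per £m, "
          f"net gain {t['net_gain']:.0f} QALYs")
```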

A current limitation of these techniques is that they require specialist knowledge and can be time-consuming to undertake. Work is currently underway to streamline them, and to explore whether they can assist, for example, the Cochrane Collaboration in deciding on priority areas for systematic reviews. There is therefore considerable potential for value of information analysis to inform these difficult decisions and help reduce waste in research.
