New guidelines on evaluating complex interventions recommend that research focus on the questions that are most useful for decision makers. This focus on the end-user of research has the potential to help implementation of complex and often controversial interventions. It is, however, not without its risks. The SSA’s Rob Calder picks out some of the implications.   

Author’s bias

I am biased towards implementation research. I spent my PhD focused on how to improve implementation of evidence-based practice. The new guidance on complex interventions is, in my view, a positive step for researching and delivering complex interventions. This kind of progress is essential if treatment is to improve in line with the best available evidence.

The world is, however, complex. And as with any change, the new guidance raises questions about who drives the research agenda, about whether research should address ideological concerns and about how to allocate resources.

Guided by which voices?

Resources spent on addressing unshakeable objections are then not spent on building an evidence base

The guidance says that “Research should answer the questions that are most useful for decision makers rather than those that can be answered with greater certainty”.

The suggestion is that, rather than just study whether something works, researchers should also look at acceptability, feasibility, and at how to overcome barriers to implementation. This will prevent wasted time evaluating interventions that work in controlled research settings, but that are then impossible to implement. Researchers should (according to the framework) focus on answering the questions that decision makers need answered. And in so doing, remove barriers to implementation.

If, however, researchers focus their efforts on answering those questions, they might end up spending a disproportionate amount of time examining public perceptions of services. Questions about acceptability, feasibility and barriers to implementation are important, but researching them could mean that resources go towards responding to unfounded concerns and not towards building an evidence base about what works. This potential for ‘getting stuck’ is probably greater for interventions that are deemed controversial because of their unique or innovative approach, but which are potentially life-saving.

If you take the example of drug checking services as a complex intervention, there are two ways to approach the research. The first involves researching whether those services prevent overdoses, poisonings and other drug-related harms. The second is to respond to decision makers’ concerns about whether such services cause local drug dealers to start hanging around and whether young people are encouraged to use drugs. These are two distinct research programmes. The latter risks research questions being based on the biases, prejudices and the temporary whims of worried people who have little knowledge of the area.

Who drives the research agenda?

It is a difficult balance. Research that does not engage with popular debates becomes quickly out of touch and wastes resources. Research that engages too much loses its reputation as an independent voice of reason

In fairness, decision makers’ questions rarely appear on a whim, nor are they unduly influenced by headlines in The Daily Chronic (or similar). Decision makers can feel duty bound to consider adverse side effects such as clusters of people waiting to test drugs; increases in local levels of drug use; and the upset that such provision might cause local residents.

There is, however, a danger that research then becomes unduly influenced by the popular debates and objections that can underpin these concerns. This is particularly dangerous if those debates are driven by people who cannot (or do not wish to) be persuaded. Resources spent on addressing unshakeable objections are then not spent on building an evidence base. It risks developing a defensive, rather than a pro-active, way to prioritise research.

Furthermore, if you spend time on research that persuades people that something is not as awful as they think, you then risk losing time researching how to optimise delivery. And a larger effect size can be far more persuasive than multiple studies reporting that people’s unfounded concerns are … well, unfounded.

It is a difficult balance. Research that does not engage with popular debates becomes quickly out of touch and wastes resources. Research that engages too much loses its reputation as an independent voice of reason. It’s a complex situation – if only there were a framework to guide us.

Complexity and ideologies

The situation becomes more difficult when you involve ideologies and values. Imagine an intervention (let’s muster all our imagination and call it ‘Intervention A’) for which there is good evidence of effectiveness and very little evidence of adverse effects. Unfounded fears about ‘Intervention A’ can still persist – you don’t need to look far into 2021 to find unshakeable objections to well-researched health interventions. In some cases, those fears and objections can stem from beliefs, ideologies, or values rather than from a lack of evidence.

Imagine then that you have a decision maker who dislikes ‘Intervention A’ because they believe that drug use is morally wrong. Would it then be appropriate to spend time researching how to dissuade that person from their view? This raises ethical questions about whether to conduct research with the purpose of persuading people that their ideologies or values are wrong and should be changed.

Data and ideologies

It is, however, a complex and murky world. Alongside these ethical concerns are the very real and continuing harms from not using evidence to challenge ideological opposition to drug treatments. It could be argued that there is a moral duty to do so in cases where those ideologies or values are based on false assumptions, misunderstandings about people who use drugs or ‘bad data’. Indeed, it is the job of researchers to scrutinise those assumptions, address those misunderstandings and provide accurate data. The difference between developing an evidence base and persuading decision makers of what is ‘right’ is not separated by a line, but better represented by a Venn diagram of several overlapping grey circles. All of which are filled in using the same grey tones. None have labels.

It is important that research continues to ask what is right and that it responds to key stakeholders. But the notion of asking what evidence is needed to prove that someone else is wrong, whilst often necessary for implementation, warrants some caution.

One way of being cautious is to ensure that a wide range of stakeholders are involved in designing research for complex interventions. Co-production can help researchers identify the important questions, and to avoid the tempting but fruitless arguments. The more people included when designing evaluation research, the less likely researchers are to simply respond to the unfounded worries of the few, and the more likely they are to focus on the questions that genuinely need to be answered.

The new framework is an important step towards evaluation research that grasps the complexity of implementation. The questions about the impact this has on research agendas, funding allocation and the role of research in challenging ideologies should not slow this progress. I have long considered this kind of development important, and now that it seems to be materialising, I am more cautious than ever. I am reminded of George Eliot’s words:

“And certainly, the mistakes that we male and female mortals make when we have our own way might fairly raise some wonder that we are so fond of it.”

Blog written by Rob Calder with substantial help from Natalie Davies

The opinions expressed in this post reflect the views of the author(s) and do not necessarily represent the opinions or official positions of the SSA.

The SSA does not endorse or guarantee the accuracy of the information in external sources or links and accepts no responsibility or liability for any consequences arising from the use of such information.

