Improving our understanding of how interventions work


By Sarah Morgan-Trimmer

At DECIPHer, many of our studies are randomised controlled trials. These are a robust way to find out if a health improvement intervention has an effect, because we can compare the outcomes of two groups whose only difference is whether or not they receive the intervention. However, this comparison only tells us so much. We also carry out process evaluations alongside the main trial, so we have an understanding of not just whether or not something works, but how and why. Perhaps we thought one mechanism was operating but there was really something else happening. Perhaps many things are required to happen together in order for change to occur, or maybe only one of them is important and the rest are redundant. Perhaps the mechanism will work in some settings or for some types of individuals but not others.

A process evaluation should explain how the processes and outcomes of an intervention occurred, but this is not always straightforward. Guidelines on process evaluation, funded by the MRC Population Health Sciences Research Network, are forthcoming. Process evaluations are a developing area, and are increasingly informed by social science perspectives. One example of a more social science-informed approach being used in public health is ‘realist evaluation’, which examines ‘what works, for whom and in what circumstances’ (for a recent debate on this see this article by Bonell and colleagues, comments by Marchal and colleagues, and the original authors’ response).

Compared to much public health research, social science approaches tend to have a better grasp of the idea of ‘process’ and a more established tradition of using theory, and they engage more closely with the context of interventions. Here are some thoughts on how the social sciences can help us work out the best ways to understand how interventions work.

Practical theory for complex processes

“There’s nothing so practical as a good theory” – Kurt Lewin

A process evaluation should produce a narrative that attempts to explain the mechanisms of what happened in an intervention. However, there are many processes going on at the same time – within individuals, groups and organisations – which overlap and interact with each other, and are not always linear. Promising new areas such as complexity theory are starting to improve our understanding of the interactions between intervention processes, but this area of process evaluation is still in its infancy. When conducting an evaluation, it may be helpful to distinguish between processes relating to the trial (e.g. data collection), to intervention implementation (e.g. delivering activities or sessions) and to the theorised mechanisms of change in the participants (the processes they undergo in changing their behaviour).

All of these processes also interact with their contexts, which are complex, dynamic and may change over time. There is a great deal of relevant social science literature in this area, for example on social contexts of health inequalities, but this is currently underused in public health research. Resource constraints will often dictate that an evaluation attends to only some of these elements of processes and contexts, and which elements are prioritised will vary depending on the research priorities.



Theory, which summarises and explains universal processes, can be a practical tool for understanding intervention processes and contexts. Logic models are a useful visual tool for articulating the theory underlying an intervention, and can provide a basis around which to structure data collection and analysis. Both theory and logic models could be used more extensively and explicitly in process evaluations to support a better understanding of different aspects of an intervention, from its local implementation to the fundamental mechanisms at work, such as social influences on dietary habits. Importantly, an intervention can be used to test a pre-existing theory, but it can also be used to build theory by collecting data on phenomena about which little is known.

Using theory as a means of better understanding processes takes us beyond the ‘does or doesn’t it work?’ question, and beyond considering a few process issues (such as retention or acceptability) in isolation from the rest of the trial data. In a recent positive development, the MRC have announced a new fund to support theory and modelling of interventions, which could form a useful foundation for process evaluations.


Mixing methods

Making sense of complex sets of processes requires a sophisticated use of mixed methods. Different data are required on the various aspects of processes and outcomes of trials, to address different questions. However, there are challenges in identifying the best ways to mix methods, particularly in linking qualitative process data to quantitative outcomes data to explain causal links. This is an important function of process evaluation which is rarely addressed in the literature.

There are many rationales and ways of mixing methods; the most well-known is triangulation, where data gathered using different methods are compared to test accuracy. Another technique is complementarity, where data from different methods tell us about different parts of the intervention. Alternatively, qualitative data may be used to elaborate on quantitative data; for example, if satisfaction levels measured by a survey are found to vary, follow-up interviews could be conducted to find out why.
Such types of mixed methods may require more iterative approaches to data collection and analysis than is currently common in trials. For example, where data contradict each other, further investigation and analysis will be required to understand why.

Unexpected data

Complex interventions in the real world are inherently unpredictable. Funders require trials and process evaluations to be planned in detail; however, unexpected data often emerge, particularly from qualitative work, although this is not always reported. Because it can be perceived as a lack of foresight, expertise or control over a project, this is an uncomfortable subject.

However, the emergence of unpredicted data is an ideal opportunity for theory-building: further data collection and analysis can follow where the data lead. This makes the best use of data emerging from a study to further our understanding of the intervention and its underlying mechanisms. Ignoring these data increases the chances of future research and interventions being irrelevant or ineffective. As with mixed methods, being open to unexpected data requires more iterative methods. This in turn may require more flexibility from funders about data collection and analysis in the final stages of a study than is currently provided: these methods cannot be planned entirely in advance.

These are significant areas in need of development if we are to improve process evaluation methodology in the field of health improvement. Understanding how complex health behaviour interventions work is a difficult undertaking, but there are ideas in the social sciences that offer possibilities for developing our methods.


This blog is based on the following paper:

Morgan-Trimmer, S. (2013) Improving process evaluations of health behaviour interventions: learning from the social sciences. Evaluation and the Health Professions; doi: 10.1177/0163278713497363

Dr. Sarah Morgan-Trimmer is a Research Associate at DECIPHer.

Rat cartoon – source: http://www2.smumn.edu/facpages/~dbucknam/
