
Creating Nesta’s blueprint for halving obesity - learnings from academia and policy

I’ve spent the past two years leading on Nesta’s blueprint for halving obesity, a key project in delivering our mission to help people live longer, healthier lives.

Whilst there’s been progress in recent years in identifying potential policies for tackling obesity, the next step is to work out which are the most impactful and can meaningfully shift the dial.

Nesta’s blueprint for halving obesity is an online toolkit that makes plain the cost and impact of a wide range of obesity policies. It helps to show that halving obesity is achievable without radical change or high cost – by making a number of iterative changes.

Development of the toolkit involved drawing upon a huge evidence base and synthesising research to make accurate and useful policy recommendations.

Having come from academia to lead this project at Nesta, I’ve been reflecting on the past two years: on the challenge of balancing the rigorous methodology of academia with the need for real-world impact that underpins policy research, and on the necessary differences between the two approaches.

For both policymakers and academics working towards practical outcomes, there can be challenges in translating evidence into action. But our experience of creating the blueprint offers lessons about what these challenges are and how they can be overcome.

Assessing the evidence at pace

A core component of the project was a colossal synthesis of evidence on the impact of dozens of interventions for obesity. When first embarking on this, my initial yardstick was how long a Cochrane-standard systematic review (and meta-analyses) would take. ‘This is a three-to-five-year research programme’, I thought.

But given the scope of the work and Nesta’s role as an impact-focussed innovation agency, the method of synthesising the evidence needed to be adapted. Policy research naturally has to happen on a rapid timeframe to efficiently respond to changing political landscapes. 

So we took lessons from more efficient methods used in academia, such as rapid review methodology, and worked at a faster pace by using one researcher to screen abstracts instead of two.

Instead of conducting our own meta-analyses, we selected a single evidence source for each intervention (eg, a meta-analysis or impact assessment where possible) and tested our selections with a panel of experts. With this approach, we drew on existing gold-standard methodology and expertise rather than generating new evidence.

One reflection from moving into a policy-facing role is that working at pace is a positive thing: we were able to deliver the toolkit at an opportune political moment while still drawing on rigorous research and the collective knowledge of experts. For those doing similar work, tapping into a panel of experts as early as possible is key. A 30-minute meeting with someone whose bread and butter is one specific policy area can save days or weeks of desk research.

Filter the evidence by focusing on the most important metric

There have been thousands of trials published over the last three decades about the most effective interventions for weight loss – from diets to surgery to behavioural ‘nudges’ in the food system. 

Across the thousands of papers, dozens of different outcomes have been considered. Some studies test whether an intervention leads to improved clinical outcomes over several years (eg, weight change, risk of disease) while others might test the effect on energy intake over a matter of hours. 

When different interventions for obesity are tested against different metrics, it becomes very difficult to identify which is most effective.

For example, which intervention is better for the average person: restricting unhealthy food advertising, which leads to that person eating 50 fewer kcals per day – or a 12-week weight loss programme, which leads to about a 2kg weight loss after two years? 

Then add another 20 intervention options into the mix… Comparing which is the most impactful becomes all but impossible for a policymaker.

The blueprint toolkit uses high-quality evidence produced by academics but models the impact of interventions against a single metric: the impact on national obesity rates. 

We chose this outcome because, firstly, it’s the pertinent yardstick when thinking about public health and, secondly, many obesity-related outcomes (eg, physical activity, calorie intake, food purchasing behaviour) can be modelled to map onto national obesity prevalence. Whilst assumptions are required when mapping different outcomes onto the single outcome of obesity prevalence, it is far more useful to be able to compare policies against the same metric to inform evidence-based decision-making.
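
To make this mapping concrete, here is a minimal Python sketch of the general idea. It is not Nesta’s actual model: the simulated cohort, the 2% programme uptake and the rough 7,700 kcal-per-kg conversion are all illustrative assumptions, chosen only to show how two very different study outcomes can be expressed as a change in obesity prevalence.

```python
# Illustrative sketch only (not the blueprint's model): express two different
# study outcomes as one shared metric -- the change in obesity prevalence.
# The cohort, uptake rate and kcal-to-kg conversion are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
heights_m = rng.normal(1.70, 0.09, n)    # assumed adult height distribution
weights_kg = rng.normal(78, 15, n)       # assumed adult weight distribution

def obesity_prevalence(weights, heights):
    """Share of the cohort with BMI >= 30."""
    bmi = weights / heights ** 2
    return (bmi >= 30).mean()

baseline = obesity_prevalence(weights_kg, heights_m)

# Outcome type 1: an advertising restriction reported as ~50 kcal/day lower intake.
# Convert to an approximate weight change with a crude rule of thumb
# (~7,700 kcal per kg over a year) -- an assumption, not the blueprint's method.
kg_change_ads = 50 * 365 / 7_700
prev_ads = obesity_prevalence(weights_kg - kg_change_ads, heights_m)

# Outcome type 2: a weight-loss programme reported directly as ~2 kg lost,
# but only among the (assumed) 2% of adults who take it up.
enrolled = rng.random(n) < 0.02
prev_programme = obesity_prevalence(weights_kg - 2.0 * enrolled, heights_m)

print(f"baseline prevalence:      {baseline:.1%}")
print(f"after advert restriction: {prev_ads:.1%}")
print(f"after weight programme:   {prev_programme:.1%}")
```

Once both policies are expressed this way, they can sit side by side on the same scale, which is exactly what the toolkit does for the full set of interventions.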

Estimating how delivery affects interventions

Nesta’s blueprint for halving obesity has a very particular focus: what would be the effect of a specific policy lever on the UK population as a whole?

But in my past experience working as a public health researcher in academia, my focus was very different: what is the effect of this very specific intervention on a specific outcome for a specific group of individuals?

When testing an intervention on a specific group of people in that context, beyond the effect itself, we had a high degree of control over, and knowledge about, whether each person did or did not receive the intervention in the way it was intended.

In contrast, when building the blueprint toolkit we quickly realised that whilst some policy levers map directly onto the intervention tested in the academic literature (eg, the impact of a weight-loss drug), others required assumptions about how they might be delivered in reality (rather than how they were delivered as part of a scientific study).

Take, for example, a policy intervention that reflects the provision of a treatment, such as: ‘expand the provision of NHS bariatric surgery… specifically by doubling the number of people receiving surgery from 6,500 to 13,000 per year’. Estimating the impact here is straightforward, given the very good evidence about the per-person impact of different forms of bariatric surgery. To estimate the population-wide impact of this policy, we applied the effects to a small proportion of the population (the benefit for the extra 6,500 people having surgery) and assumed no change for the remaining millions of people in the dataset. By doing this, we can be fairly confident that, if this policy were enacted, we would have correctly predicted the outcome.
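
As a worked illustration of that ‘apply the effect to a few, assume no change for the rest’ step, here is a hedged sketch. The simulated cohort, the assumed 25kg per-person effect of surgery and the assumed 53 million UK adults are placeholders for illustration, not figures from the blueprint.

```python
# Illustrative sketch only: population effect of doubling NHS bariatric surgery
# from 6,500 to 13,000 operations a year, applied to a toy cohort.
import numpy as np

rng = np.random.default_rng(1)
n_adults = 1_000_000                       # stand-in for a national adult dataset
heights_m = rng.normal(1.70, 0.09, n_adults)
weights_kg = rng.normal(78, 15, n_adults)

def obesity_prevalence(weights, heights):
    return ((weights / heights ** 2) >= 30).mean()

# Scale the extra 6,500 operations down to the toy cohort's size
# (assuming roughly 53 million UK adults -- an illustrative figure).
extra_ops_cohort = round((13_000 - 6_500) * n_adults / 53_000_000)
assumed_loss_kg = 25                       # hypothetical per-person effect of surgery

# Offer surgery to the people with the highest BMI in the cohort;
# everyone else is left unchanged, as in the approach described above.
bmi = weights_kg / heights_m ** 2
recipients = np.argsort(bmi)[-extra_ops_cohort:]
new_weights = weights_kg.copy()
new_weights[recipients] -= assumed_loss_kg

print(f"before: {obesity_prevalence(weights_kg, heights_m):.2%}")
print(f"after:  {obesity_prevalence(new_weights, heights_m):.2%}")
```

In a sketch like this the shift in prevalence is small, which echoes the blueprint’s framing: individual policies contribute iterative changes rather than halving obesity on their own.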

In other cases, however, linking the correct delivery of the ‘intervention’ to the ‘policy’ was trickier. 

Take an intervention such as a ‘ban on all volume promotions on unhealthy products for medium and large food retail businesses such as restaurants, coffee shops, fast food outlets’ (ie, offers like buy-one-get-one-free). More assumptions need to be made when estimating the potential population impact of this policy than for a policy of funding more bariatric surgery.

In the case of the blueprint toolkit, we used an analysis of Kantar World Panel data to estimate the proportion of calories purchased from food retailers (excluding home delivery). We used knowledge about the feasibility of enforcing such restrictions on all retailers and concluded that this would likely not be possible for small and micro businesses. We therefore limited our assumptions to the effect on calories purchased in large food businesses that would be subject to greater enforcement. Due to the complexity of this policy, we’re less confident that we have predicted the exact outcome, but we believe it’s better to be close than to not try at all. 
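
The scaling described above can be sketched as simple arithmetic. Every figure below (average purchased calories, the share bought from in-scope retailers, the share bought on volume promotion and the assumed behavioural response) is a hypothetical placeholder rather than a number from the Kantar analysis or the blueprint.

```python
# Illustrative sketch only: per-person calorie effect of a volume-promotion ban,
# restricted to the large retailers where enforcement is assumed to be feasible.
avg_daily_kcal = 2_200          # assumed average purchased calories per person per day
share_large_retailers = 0.70    # assumed share of calories bought from in-scope businesses
share_on_promotion = 0.20       # assumed share of those calories bought on volume promotions
assumed_reduction = 0.25        # assumed fall in promoted purchases if the ban is enacted

kcal_in_scope = avg_daily_kcal * share_large_retailers * share_on_promotion
kcal_saved_per_day = kcal_in_scope * assumed_reduction

print(f"estimated reduction: {kcal_saved_per_day:.0f} kcal per person per day")
# This per-person figure would then feed into the same obesity-prevalence model
# as every other policy, keeping all interventions on one comparable metric.
```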

Though we’re therefore less confident about the exact outcome of an intervention that is complicated to deliver and hard to monitor, at a policy-research level we believe that a partial approximation of the outcome is good enough if it indicates a sufficient impact towards our policy goal.

Understand the competing priorities of a policy audience

When writing an academic paper, the key audience is other academics who are highly knowledgeable in the area of interest and who are interested in the same, or very similar, outcomes. But the target audience for our toolkit – policymakers – are not just interested in obesity; they also need to consider a variety of competing outcomes (eg, ultraprocessed food, specific macronutrient intake).

In our case, the analysis published in the blueprint toolkit only considers population-level weight change, so policies that increase the nutritional quality of food for specific groups (for example, free school meals) or provide infrastructure to encourage physical activity are ranked relatively low in terms of their impact on obesity. 

Our very specific framing suggested these were not effective policies to pursue, but that was only true in relation to our objective of population-level weight change. There’s actually good evidence for the positive benefits of providing universal free school meals for children – both in terms of education and diet quality. Similarly, increasing opportunities to be physically active has been shown to improve well-being. It’s just that these policies would not be sufficient to meet the level of ambition required to reduce national obesity rates.

Policymakers need to balance a number of goals and so disseminating findings requires greater nuance than if one were presenting findings to an academic audience, many of whom are most interested in a smaller number of outcomes.

Conclusion

A key learning from moving from academia to a policy-facing role is that the methods are different, and that’s okay. Given the pace of policy research and the need to take advantage of opportune moments in an ever-changing political landscape, it is neither possible nor necessary to work to the same rigorous standards as in academia. It is important, though, to make use of gold-standard research and the outputs and knowledge of academics, working closely with experts rather than starting from scratch.

Author

Kate Tudor

Principal Researcher, healthy life mission

Kate works as a principal researcher for the healthy life mission.
