Making room to learn from failure

Becoming comfortable with failure doesn’t mean being carefree. Risks should be taken in a managed way with plans in place to ensure any failures are ‘good’ and enable policymakers to ‘fail early, learn fast’. Below are some thoughts on how one can make room to learn from failures.

At IGL’s global conferences in Barcelona and Boston, Bas Leurs and I ran a workshop on ‘successful failures’. With an already packed agenda there was no repeat performance at IGL2019, but as I enjoyed running the workshop (and there is at least one other fan out there) I had been thinking about how to update the session, and some recent conversations prompted me to share those ideas here.

The main focus of the workshops was exploring how comfortable participants were with failure: the difference between ‘good’ and ‘bad’ failures, and how to avoid the bad and learn from the good.

For the first two workshops we drew heavily on Amy Edmondson’s spectrum of reasons for failure and on the work on building an experimental culture by Bas and his then colleagues in Nesta’s Skills team.


  • Bad failures are those that we should seek to avoid - for example, launching a policy without considering existing evidence on what works, or failing to follow agreed plans.
  • Good failures are those that should be encouraged - such as testing a promising new policy idea and quickly learning that the targeted SMEs are not receptive to the offer. Good failures can become bad if the learning does not take place, or is not acted on or shared.

This year the plan for the workshop would have been to look at how policymakers can make room for those good failures.

Firstly, we would have looked at the use of experimentation funds to test assumptions about delivery and the potential impact of new ideas, and at how to ‘fail early, learn fast’.

IGL has advocated the use of policy experimentation funds to harvest new ideas from across the policy ecosystem and to create a mechanism for distinguishing programmes that should be scaled from well-intentioned but ineffective efforts. We are delighted to see, and to support, the funds emerging in this policy space following recent actions by BEIS and the European Commission.

A key feature of an experimentation fund is that it provides the resources for policymakers and practitioners to run and evaluate projects to see if they help meet a particular policy objective. Failures of supported projects can occur in two areas:

  1. Implementation - Assumptions about the ability to successfully deliver the new programme are not met - eg SMEs in the target group do not engage with the new form of support that is offered.
  2. Impact - The new programme of support is delivered as planned, but robust evaluations do not find any evidence of the positive impacts that were expected.

The room for such failures shrinks as more resources are invested in delivering a project - people will be more understanding of quickly learning that a £0.5 million programme has failed to improve outcomes than of discovering the same result after £100 million has been spent. Experimentation funds can be designed to test for these sources of failure through two strands:

  1. Smaller-scale ‘prototyping’ or ‘proof of concept’ experiments - Low levels of funding to test and develop policy proposals at a very early stage.
  2. Larger-scale impact evaluation trials - Higher levels of funding to conduct robust impact evaluations of proposals.

These ideas are illustrated in the diagram below.

[Diagram: the policy journey from start to end. Source: adapted from Nesta’s Playbook for Innovation Learning]
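To make the ‘fail early, learn fast’ gate between the two strands concrete, here is a minimal Python sketch; the pilot size, engagement threshold and function name are illustrative assumptions, not features of any actual fund:

```python
# Hypothetical stage-gate for a two-strand experimentation fund.
# The pilot size and engagement threshold are illustrative assumptions.

PILOT_SIZE = 20            # strand one: small 'proof of concept' pilot
MIN_ENGAGEMENT_RATE = 0.5  # share of pilot SMEs that must take up the offer

def next_step(pilot_engaged: int, pilot_size: int = PILOT_SIZE) -> str:
    """Decide whether a strand-one pilot justifies a strand-two impact trial."""
    if pilot_engaged / pilot_size < MIN_ENGAGEMENT_RATE:
        # An implementation failure caught cheaply - a 'good' failure,
        # provided the lessons are recorded, shared and acted on.
        return "stop: redesign the offer before investing further"
    return "proceed: fund a larger, robustly evaluated impact trial"

print(next_step(pilot_engaged=7))   # 35% take-up -> stop and learn
print(next_step(pilot_engaged=14))  # 70% take-up -> proceed to a trial
```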

Those experiments in the first strand, with lower levels of resource commitment, would have room to explore assumptions about the design and ability to implement the new policy idea. Many of these lessons can and should be learnt before significant resources are invested - there is no need to test with 400 SMEs what could be learnt from working with 20.

In contrast, the second strand, for testing assumptions about economic impact, is likely to involve much higher levels of investment, given the demands of sample sizes and data collection.
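To make those demands concrete, here is a minimal back-of-the-envelope power calculation in Python; the ‘small’ standardised effect size of 0.2 is a hypothetical assumption for illustration, not a recommendation:

```python
from scipy.stats import norm

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Approximate sample size per arm for a two-arm, two-sided comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the significance level
    z_beta = norm.ppf(power)           # value needed to reach the target power
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# A standardised effect of 0.2 needs roughly 392 SMEs per arm - in the
# region of the 400 mentioned above - far beyond any strand-one pilot,
# which can only surface implementation problems, not impact.
print(round(n_per_arm(effect_size=0.2)))  # -> 392
```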

With greater sums invested and the objective of testing a programme’s economic impact, the room for failure in implementation is much more limited - those lessons should already have been learnt. However, if genuinely open questions about what works are being answered, there must be room to learn that projects do not deliver the expected benefits - the second form of failure.

Another related aspect, which is important but not something I will discuss here, is how to create the incentives and manage the risks for the practitioners tasked with delivering the support, and perhaps also with coming forward with the ideas. How can they feel able to develop and deliver innovative new projects without fearing consequences for their wider activity?

Other examples of making room for failure could come from IGL’s work on grantmaking experiments, including ‘phantom experiments’ taking place behind the scenes.

Those who provide funding through competitions can be very wary of experimenting with new assessment mechanisms, fearing negative impacts on their selection decisions or exposure to legal challenge.

However, as Teo discussed during his IGL2019 workshop (more to come on this in a blog soon), it is possible to create significant room for failures by designing experiments that do not affect business as usual until the design and implementation have been tested. This is particularly true for ‘phantom experiments’.

[Diagram: funding decision]

For example, to reduce the burden on applicants you might want to cut the amount of information they provide - but which parts should you cut? You could test how different changes would affect the outcome for existing applicants by simultaneously (or even retrospectively) running process A, where judges score the standard 20-page application, and an experimental process B, where judges review only four pages.

You could then compare the outcomes and see the extent to which the funding decisions would have differed, without actually changing how decisions are made. Once questions about the design and implementation have been answered in this ‘safe environment’, a decision can be made on whether to test the new process in the real world and see whether assumptions about its expected impacts hold.
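As a sketch of what the comparison behind a phantom experiment could look like, here is a hypothetical Python simulation; the scores, noise levels and funding rule are invented for illustration rather than drawn from any real competition:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_apps, n_funded = 200, 40

# Both processes observe the same underlying application quality, but the
# shortened 4-page review is assumed (hypothetically) to be noisier.
quality = rng.normal(size=n_apps)
score_full = quality + rng.normal(scale=0.3, size=n_apps)   # process A: 20 pages
score_short = quality + rng.normal(scale=0.6, size=n_apps)  # process B: 4 pages

# Fund the top-scoring applicants under each process. Only process A's
# decisions are real; process B runs as a 'phantom' behind the scenes.
funded_full = set(np.argsort(score_full)[-n_funded:])
funded_short = set(np.argsort(score_short)[-n_funded:])

agreement = len(funded_full & funded_short) / n_funded
print(f"Share of actual awards the 4-page process would also have made: {agreement:.0%}")
```

Because process B never touches real funding decisions, a low agreement rate here is a cheap, safe failure to learn from rather than a legal or reputational risk.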

Hopefully this blog provides some ideas on how policymakers could create room for failure that would allow them to experiment with new ideas and programme designs. Who knows, perhaps we will have a workshop on this at IGL2020, where we will be back in London to celebrate five years of bringing together senior policymakers, researchers and practitioners to discuss topics such as this. Register your interest here.

Author

James Phipps

Deputy Director, Innovation Growth Lab

James is the Deputy Director for the Innovation Growth Lab (IGL) based in Nesta’s Research, Analysis and Policy team.
