Reflections on the IGL Research Meeting

Last December we co-hosted our winter Research Meeting at the Harvard Business School, together with Professors Karim Lakhani and Rembrand Koning. We welcomed over 50 researchers for a day packed with eight presentations of design-stage, ongoing and completed randomised controlled trials (RCTs).

The trials presented spanned the innovation, entrepreneurship and business growth field, and adopted a range of experimental methods - from testing ‘nudges’ applied to grant applicants through to impact evaluations of intensive business growth and entrepreneurial training programmes.

Whilst there remains a real dearth of trials in this policy area, the ones presented demonstrate the wide scope for using trials to identify new insights and improve policy outcomes. Two takeaways stood out to us.

High-value trials can be low cost and high impact

Trials are often thought to be expensive or complicated, but throughout the Research Meeting we saw projects that delivered valuable results without being either.

Trials that focus on how programmes communicate with participants, for example, are an inexpensive way to test different strategies for maximising participation or improving programme delivery. These simple but focused tweaks to communication can even help measure underlying motivations and programme success rates, as the early-stage research presented by Ina Ganguli showed.

Offering feedback to entrepreneurs applying to a business support programme, as presented by Rodrigo Wagner, is another example of a straightforward intervention that can potentially have a large impact.

Knowing what doesn’t work is perhaps as important as what does

Another key lesson that came out of the presentations is the importance of knowing not just when a programme works, but also when it does not achieve the impact expected.

One paper in particular, co-authored by Thomas Astebro, discussed a trial that found no evidence of impact. The programme, an intensive leadership training course for social entrepreneurs, did not lead to significant changes in attitudes or venture progress. This was clearly not what the programme funders had hoped for, but the evidence helped them pivot - the programme will now focus on teaching skills rather than changing attitudes, an approach that will be evaluated with another trial.

This is a great example of how RCTs can be used to improve interventions; to be able to do that, it is crucial that we publish, and even welcome, negative or null findings.

Despite these positive takeaways, the Research Meeting highlighted how the use of RCTs is still a relatively new practice in the field of innovation, entrepreneurship and growth. As a result, there are a few lessons the field can borrow from colleagues in other disciplines where RCTs are more commonplace.

Trying to achieve too many things simultaneously can backfire

Projects may want to test multiple treatments in order to simultaneously measure and compare the impact of different interventions. In theory this seems very useful, but in practice it can leave the trial underpowered, because the sample size in each arm will likely be too small. In other words, if we spread a given sample across multiple trial arms, we may lose the ability to draw reliable conclusions: the impact then needs to be unrealistically large in order to be detected.
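As a rough illustration of this trade-off, here is a minimal sketch (assuming Python with the statsmodels library; the total sample and the power and significance targets are made up for illustration and are not drawn from any of the trials presented). It shows how the smallest effect detectable in a pairwise comparison grows as a fixed sample is split across more arms.

```python
# Minimal sketch: how splitting a fixed sample across more trial arms inflates
# the minimum detectable effect (MDE). All numbers are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
total_sample = 600        # hypothetical total number of participants
power, alpha = 0.8, 0.05  # conventional power and significance targets

for n_arms in (2, 3, 4, 6):
    n_per_arm = total_sample // n_arms
    # Smallest standardised effect (Cohen's d) detectable in a pairwise
    # comparison between two arms of this size, at the chosen power and alpha.
    mde = analysis.solve_power(nobs1=n_per_arm, power=power, alpha=alpha,
                               ratio=1.0, alternative='two-sided')
    print(f"{n_arms} arms: {n_per_arm} per arm, minimum detectable effect d = {mde:.2f}")
```

With these illustrative numbers, a two-arm trial can detect a standardised effect of roughly 0.23, whereas a six-arm trial can only detect effects of around 0.40 in any pairwise comparison - which is why adding treatments without adding sample quickly pushes the required impact towards implausible sizes.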

In addition, researchers often do not set out one specific primary outcome beforehand, hoping instead to explore different possible explanations of the results across a number of variables. However, this makes the results difficult to interpret and diminishes the credibility of the findings.

A much better approach is to start from a clear theoretical framework, using the trial to test specific assumptions with pre-established outcome measures. Best practice dictates that these be set out in the trial protocol, prior to the start of the intervention. Of course, this does not preclude further analysis of secondary outcomes, as long as those findings are presented as such. Indeed, given the novelty of trials in this area, many new insights may come from unanticipated analyses.

Better reporting of results

Another area where trials in innovation and entrepreneurship would benefit from embedding best practice from other fields is the reporting of results. For instance, it is currently not standard in our field to report information on the randomisation, such as which method was used, or on the statistical power of the trial (both the ex-ante calculations and the power achieved ex-post). This is common practice in other fields such as health, where the CONSORT guidelines are followed, and it helps peers quickly and effectively gauge the quality of the research. In this respect, IGL will work to promote a common set of standards.
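As a concrete, purely illustrative example of what that reporting could look like, the sketch below (again assuming Python with statsmodels, with made-up numbers) computes the two power figures mentioned above: the ex-ante sample size required to detect a pre-specified effect, and the power actually achieved once the realised sample is known.

```python
# Minimal sketch of the two power figures worth reporting (illustrative numbers only):
#  - ex ante: participants per arm needed to detect a pre-specified effect at 80% power
#  - ex post: the power actually achieved given the realised sample size
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
target_effect = 0.25  # hypothetical minimum effect of interest (Cohen's d)
alpha = 0.05

# Ex-ante calculation, done at the protocol stage.
required_per_arm = analysis.solve_power(effect_size=target_effect, power=0.8, alpha=alpha)
print(f"Ex ante: about {required_per_arm:.0f} participants per arm for 80% power")

# Ex-post calculation, once recruitment and attrition are known.
realised_per_arm = 180  # hypothetical realised sample per arm
achieved_power = analysis.solve_power(effect_size=target_effect,
                                      nobs1=realised_per_arm, alpha=alpha)
print(f"Ex post: power achieved with {realised_per_arm} per arm is {achieved_power:.2f}")
```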

In the months to come, we’re hoping to dig deeper into these issues, both through blogs and through our forthcoming IGL Trials Toolkit. To be the first to hear about this work, sign up to IGL’s monthly newsletter.

Authors

Teo Firpo

Senior Researcher, Innovation Growth Lab

Teo is a Senior Researcher for the Innovation Growth Lab in Nesta’s Research, Analysis and Policy unit.

Lou-Davina Stouffs

Senior Programme Manager, Innovation Growth Lab

Lou-Davina was a Research and Programme Manager for the Innovation Growth Lab (IGL) in Nesta's Research, Analysis and Policy unit.
