If we want to improve the creation and use of evidence in the world of social change, we need more interdisciplinarity between funders, policy-makers, researchers and delivery organisations, as well as humility and empathy from each camp.
I recently spoke at the Centre for Youth Impact’s Gathering on the use of evidence from a funder’s perspective. Below is a summary of that presentation, outlining the approach to evidence that Nesta has taken through the Centre for Social Action Innovation Fund, and then highlighting five things that I’ve learnt over the past year in using Nesta’s Standards of Evidence (‘the Standards’).
For the past year, I have been working on a fund that has tried to take an intelligent approach to evidence. Working with The Social Innovation Partnership (TSIP), we have designed and implemented a six-step process:
I have gone through this process with seven projects, which have developed seven very different evaluations, ranging from a large-scale Randomised Controlled Trial (RCT) to an in-depth, qualitative process evaluation. The evaluations vary so much because the activities and needs of the projects vary so much. In most cases we (Nesta, the delivery organisations and the researchers) have made good judgements about what was appropriate and useful in terms of evaluation. In some cases we haven't - we're all still learning.
Lesson 1: The Standards are important
In my mind, the central contribution that the Standards have made is to highlight the importance of demonstrating a causal link between an activity and the change that we are trying to create (Level 3 in the Standards). Whilst it may be very difficult to establish in some cases, everyone should care about this question of 'attribution'. Everyone should want to answer the questions: "Am I making a significant positive difference?" and "Would this change have happened without my contribution?"
Lesson 2: The Standards need development
There is also a lot of room for improvement. Many people (negatively) associate the Standards with a purely quantitative approach to evaluation. To address this, they should include clear guidance on what constitutes high-quality qualitative research and how it can work rigorously alongside the quantitative. There are at least two key questions here: 1) How can qualitative research be used to assess the presence (or absence) of a causal link between an activity and an outcome? (It's not just control groups that help answer this question.) 2) How can qualitative research be used to add depth and colour to quantitative claims such as, "Participants' communication skills have increased by 11% compared with the control group"?
Lesson 3: The Standards could be dangerous
Whilst I am keen to see more funders and policy-makers using the Standards, this will only be a good thing if they have the knowledge, skill and judgement to do so sensibly. One obvious risk is that funders could pressure projects into carrying out inappropriate (and potentially destructive) forms of evaluation. The most common mistake that I can foresee here is pushing an organisation into an expensive, high-level impact evaluation before there is clarity over the project's theory of change and before the necessary quality assurance procedures are in place.
Lesson 4: Communication is key
This is about language and proper listening. There are some fundamental and important arguments that need to be settled in this field - on the relevance and use of RCTs, for example - but resolving these arguments will require more than reason. I have seen many debates and conversations fall at the first hurdle because one party is speaking a language that the other doesn't understand or doesn't relate to. Worse still is when people use deliberately emotive language to win a crowd. We should all aim to speak plainly and without rhetoric. We should all make a commitment to listen carefully and with sympathy to other perspectives.
Lesson 5: There are no geniuses
There are a growing number of individuals, projects and organisations trying to improve the use of evidence in social policy and practice. This mission is about developing the knowledge, skills and sense of judgement of four groups: funders, policy-makers, researchers and delivery organisations.
In my experience of these worlds to date, I have not met an individual who fully grasps the complexities of using evidence from all four perspectives and - equally important - who is able to communicate effectively between worlds. People who operate in each of these worlds have different expertise, face different challenges and use different languages.
My own experience of delivering and managing youth programmes has helped me to make better judgements as a grant manager at Nesta. Having gone through the pain of delivering an RCT in difficult circumstances in my previous life, for example, I have been very careful when advising and supporting organisations that are considering a Level 3 evaluation. But whilst experience in two of the four worlds has helped me to use - and support others to use - evidence effectively, my lack of experience in research and policy has meant that I have also come up short, on occasion, in my understanding and practice.
There is a lot of talk about the need for delivery organisations to develop their understanding and practice when it comes to evidence. This need is just as great in all three of the other worlds.
Of course we need experts in each field, but individuals within each field all need to develop their understanding of the others as well. This will require more interdisciplinarity in all directions as well as humility and empathy from each camp. We should be willing to ask seemingly simple questions and we should be willing to learn and have our perspectives changed as a result.
Image credit: Golden Roof