About Nesta

Nesta is an innovation foundation. For us, innovation means turning bold ideas into reality and changing lives for the better. We use our expertise, skills and funding in areas where there are big challenges facing society.

What is the evidence for edtech?

We spent last week at the biggest education trade show in the world, BETT. In this final blog in our BETT series, we ask what evidence we should be expecting of edtech companies.

Every year at BETT I play a little game: I go up to stalls at random and ask how they know whether their product actually makes a difference to what or how well children learn. In the past, I have been met with blank stares or told that ‘children just love our product’. This year was different. I was impressed by how many companies could talk about how existing research had informed the design of their product and how they had been taking baseline measures and looking for changes after children had used their product. Some even talked about plans for more rigorous research.

Helped by initiatives like EDUCATE and a more savvy, cash-strapped customer, evidence in edtech is getting some traction. It may not be quite mainstream yet (other visitors to BETT were not impressed by the evidence on offer), but from what I saw, things are definitely changing (or maybe I have just learned to avoid the robots).

For an evidence geek like me, this is great news. Simply recognising that evidence should be part of the design and decision-making process is a major step forward.

But behind the scenes, debate rages on the quality of the evidence being produced and used by edtech companies. The Education Endowment Foundation has set a high standard for rigorous, independent research into ‘what works’ in education. Next to this benchmark, much of the effectiveness research used by edtech companies is methodologically flawed and biased.

The debate around evidence in education technology ping-pongs between two uncomfortable truths:

Improving learning is hard and it is difficult to be sure that an approach ‘works’ without very rigorous research.

It is simply not practical to expect the thousands of edtech products out there to conduct rigorous impact evaluations of academic quality.

So if we want to make good, evidence-based decisions about using technology in teaching and learning, how do we move forward? What should we reasonably expect of an edtech business that takes evidence seriously? How can we support this increased enthusiasm for evidence and raise expectations without asking businesses to do the impossible?

Different types of evidence answer different questions

The first step is being clear on what different types of evidence tell us and what they don’t.

Rigorous experimental quantitative research, such as randomised controlled trials, is a powerful tool that can indicate whether a particular approach has ‘worked’.

If well designed and accompanied by high quality qualitative research, trials like this can also reveal why an approach has worked or not, who it works for and how it works. For example, a trial of the Parent Engagement Project, which uses text messages to get parents more involved in their children’s schooling, not only demonstrated positive results but also helped to show where, why and how this sort of intervention is most useful.
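At its core, the ‘has it worked?’ question a trial answers comes down to comparing outcomes between randomly assigned treatment and control groups. The sketch below illustrates that comparison with simulated test scores; the sample size, score distribution and two-point effect are all assumptions for illustration, not data from the Parent Engagement Project or any real trial.

```python
import random

# Minimal sketch of the core randomised-trial comparison:
# difference in mean outcomes between randomly assigned groups,
# with a rough 95% confidence interval. All numbers are simulated.
rng = random.Random(42)
n = 200
control = [rng.gauss(50, 10) for _ in range(n)]    # no intervention
treatment = [rng.gauss(52, 10) for _ in range(n)]  # assume a +2 point effect

effect = sum(treatment) / n - sum(control) / n

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Standard error of the difference in means.
se = (variance(treatment) / n + variance(control) / n) ** 0.5
print(f"estimated effect: {effect:.1f} points (95% CI ±{1.96 * se:.1f})")
```

If the confidence interval excludes zero, the trial indicates an effect; with small samples the interval is wide, which is one reason rigorous trials need to be large and carefully designed.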

By undertaking several experiments over time, an evidence base builds up that, at its best, can reveal ‘design principles’ (or core components). For example, analysis of the literature on computer-assisted learning shows that it is most effective when used as an in-class tool or as mandatory homework support. These design principles can help both developers and purchasers of edtech pursue strategies that are more likely to have an impact.

This approach contrasts with a more common model, whereby a specific intervention from a specific organisation is tested and, if a positive effect is found, it is kite-marked as ‘effective’ and the organisation is encouraged to scale up. There are a few reasons why, for edtech in particular, this approach alone will not solve our evidence problem:

  • Most edtech innovators are businesses and businesses have a tendency to go bust. We should not put our hopes of spreading effective practice on businesses managing to navigate schools’ sales cycles.
  • Edtech is constantly evolving: as individual products are upgraded or used with different populations, previous experimental results lose their validity. Broader design principles are likely to have better longevity.
  • There is so much edtech: there are 600 exhibitors at BETT this year; in its five-year history, the Education Endowment Foundation has undertaken rigorous trials of 85 interventions. It is just not practical to expect that more than a few per cent of edtech companies will be able to test their approaches in the most rigorous way. This problem is not unique to edtech (there are, for example, over 1,000 parenting programmes, most of which will never be tested), but it is an important obstacle nonetheless.
  • Individual experiments are vulnerable to ‘false positives’, where an effect is found due to statistical noise rather than genuine impact on individuals. This is particularly the case for analysis of large data sets, typical of those produced by new technologies. Only repetition of trials in varied contexts can give us reassurance of genuine effects.
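The false-positive risk can be made concrete with a small simulation: run many trials of an intervention that has no real effect and count how often a naive comparison of group means still crosses a conventional significance threshold. The sample size, threshold and number of simulated trials below are assumptions chosen for illustration.

```python
import random

def fake_trial(n=100, seed=None):
    """Simulate one trial of an intervention with NO genuine effect."""
    rng = random.Random(seed)
    # Both groups drawn from the same distribution: any difference is noise.
    treatment = [rng.gauss(0, 1) for _ in range(n)]
    control = [rng.gauss(0, 1) for _ in range(n)]
    diff = sum(treatment) / n - sum(control) / n
    se = (2 / n) ** 0.5  # standard error of the difference in means
    return abs(diff / se) > 1.96  # 'significant' at the 5% level?

# Around 5% of no-effect trials will look 'significant' by chance alone.
false_positives = sum(fake_trial(seed=i) for i in range(1000))
print(f"{false_positives} of 1000 no-effect trials looked 'significant'")
```

Roughly one in twenty no-effect trials clears the bar purely by chance, which is why a single positive result, especially one mined from a large data set, is weak evidence on its own, and why replication across varied contexts matters.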

Design principles can come not just from experimental research but also from the vast pedagogical literature that tells us how children learn. The EDUCATE programme, led by UCL’s Institute of Education in partnership with Nesta, BESA and F6S, is helping cohorts of edtech startups to use the existing evidence as they test and adapt their products, as well as helping them to collect new evidence.

Rapid cycle testing and other lighter touch evaluation approaches are probably the most common and immediately practical form of evidence used during product development by edtech companies. New ideas are tested on small samples in short time frames to inform the next stage of design. The results from these tests do not tell us that impact has occurred, but that is not their purpose. Their purpose is to improve each small step of a design process. If these tests are done thoughtfully, involving educators, then they increase the chance that the final design will indeed have impact.

I come last to what is probably the most common form of evidence in edtech, and the one considered least rigorous: teacher feedback.

As with rapid cycle testing, teacher feedback cannot be considered evidence of impact and should not be used as if it is, but it does have two very important uses.

The first is to help companies improve their products; the second is to help teachers navigate the incredibly confusing array of products on offer.

Edtech, more than most education projects, risks not working because it simply isn’t used, or is not used well. Kit stays wrapped in its original boxes in cupboards or is used only once; teachers don’t receive adequate training; no one knows what to do when the kit goes wrong. Teacher feedback allows us to determine whether that first step towards impact can be taken. Have others in your situation found a way to integrate a particular technology into their teaching in a way that works for them? A ‘yes’ does not guarantee impact, but a ‘no’ pretty much rules it out.

What should we expect of an edtech company?

So to return to the question I asked at the beginning of the blog: what should we expect of edtech companies that claim to be serious about evidence? We propose five expectations:

  1. Companies should have a clear theory of change that shows an understanding of where their product sits in the complex education ecosystem of schools, students, teachers and parents. Companies should understand what assumptions they are making about how their product creates impact.
  2. Companies should be using all of the available evidence, from both impact evaluations and pedagogical research, to design their products and track the quality of what they do.
  3. Where the evidence is weak and there is not yet consensus on which ‘design principles’ lead to impact, companies should be aiming to build more evidence. However, where the benefit of this research extends beyond an individual company, public or philanthropic money should go towards funding these research efforts, so that the research produced is high quality and independent.
  4. Companies should be responsive to reviews and rapid cycle tests, involving educators and students in the design process.
  5. Companies should not overclaim. Teacher reviews and rapid cycle testing are good practice and maximise potential for impact but they are not evidence of impact in and of themselves. However, these evaluations can be used as an indicator of where impact is more likely and can help justify further investment in more rigorous research.

From our experience, there are many, many edtech companies that aspire to or meet these expectations. They engage enthusiastically with programmes like EDUCATE and submit applications to the Education Endowment Foundation. But more support is needed. Research needs to be made digestible and accessible to companies, just as the Education Endowment Foundation and EDUCATE make research accessible to teachers.

We also need a clear research agenda that shows where the consensus is and where the biggest gaps in the evidence lie. The Education Endowment Foundation plans to update its review of the edtech literature this year, which is a promising starting point.

The evidence agenda has finally reached edtech. This is a critical time when attitudes and expectations can be shaped. Please get in touch with your ideas and suggestions for supporting an edtech sector that delivers real impact for students based on evidence.

Author

Lucy Heady

Impact Director

Lucy worked with portfolio companies to measure their impact and assessed the likely impact of potential investments.

Amy Solder

Deputy Chief Central Programmes Officer

Amy was the deputy chief programmes officer at Nesta, supporting the work of its three missions.
