Can AI drive innovation in qualitative research? Here’s what we’ve learnt.

As an innovation-driven organisation, we’re always exploring how new technologies can give people new and better ways to solve problems.

One of the key aspects of our work is understanding how people experience the world - whether that’s the issues that motivate our missions or the wider context in which people live, for example AI’s potential impact on citizen science.

Qualitative research lies at the heart of this, helping us to understand people’s real experiences, views, priorities and perceptions. This understanding is crucial if we’re going to help design effective, people-centred solutions that can make a real impact.

Nesta and BIT have been experimenting with AI-powered interview platforms and AI-led chatbot interviewers to explore their potential to scale qualitative research like never before.

Although we were initially trying out these platforms for internal purposes, our experiments sparked a bigger conversation about the real-world implications of using AI-powered tools to gather insights at scale, and what this could mean for the future of qualitative research, specifically around data collection.

AI-powered tools give us the potential to understand more people's experiences

One of the most exciting features of AI-powered interviews is the ability to hear from a larger and more diverse range of people in a short amount of time.

In a conventional, human-led qualitative study, speaking to hundreds of people could take months. But with AI-powered interview chatbots, we were able to conduct almost 200 interviews within weeks - a sample size and time frame that’s previously been reserved for quantitative surveys.

For us, and many other organisations, this kind of speed and scale could be transformative. We can test assumptions and gather real-world insights quickly, making it easier to adapt and respond in real time.

This agility is key when you're working toward ambitious, evolving and complex goals - in our case accelerating home decarbonisation, cutting obesity and narrowing the gap between disadvantaged children and their peers.

Engaging with people on their own terms

AI-powered chatbot interviewers also have the potential to help us reach groups of people that may be missed by more common research methods, such as people who are time-poor or less familiar with research processes.

For Nesta and BIT, this could mean reaching out at scale to more people on the frontline such as teaching assistants, supermarket workers or heat engineers - people who might not otherwise be able to spare an hour to engage with us during a busy working day.

Although qualitative research best practice aims to meet people where they are, AI-driven interviews have the potential to let busy participants engage with research at their own convenience. Rather than having to fit in with a scheduled interview, for example, they can chat to an AI-powered interviewer by text during their bus commute.

Some participants in our AI-led interviews reported feeling as comfortable as, or even more comfortable than, they would in conventional research interviews or surveys. They felt less pressure and judgement when interviewed by a chatbot, and appreciated the overall pace of the AI-powered interviews, finding no noticeable difference from communicating with a human researcher.

This technology can also allow UK researchers to reach non-English-speaking populations more effectively, as these AI-powered platforms can switch language from the moment a participant starts speaking, while still following the interview topic guide as planned.

Does AI-led qualitative research risk shallow insights and deeper digital divides?

Despite these benefits, the use of AI-powered interviews is not without limitations, and these need to be carefully understood and managed if we want this technology to be an effective tool for us all to draw on.

One significant concern is sacrificing the quality of insights for speed and volume. With AI-powered qualitative research, we risk prioritising scale over depth, assuming that more data leads to better insights.

One of the biggest strengths of qualitative research is depth - capturing the nuanced, lived experiences of people directly impacted by policies and services.

While AI-powered interviews can capture a large volume of responses quickly, they miss the subtle, contextual richness that comes from skilled human interviewers using follow-up prompts to unearth complex responses and reading body language and other non-verbal cues.

In this sense, AI-powered interviews are more akin to open-ended questions from traditional surveys than to one-to-one interviews, which limits the ability to generalise or draw causal conclusions - unlike survey questions, which are uniform and rigorously tested.

A second concern is that scaling up qualitative interviews doesn't necessarily lead to more meaningful findings.

When we tried out a couple of AI interviewers, we found that just one human-led interview could surface as many valuable insights as dozens of AI-led interviews, if not more.

Sometimes the quality of AI-led interviews fell short - for example, many interview answers were only a few words long and the chatbot offered few follow-up prompts. This difference in quality might partly be down to time: the average length of AI-led interviews was around 15 minutes, whereas human-led interviews typically range from about 30 minutes up to an hour.

This suggests that the richness of the data is often tied to the quality of interaction, not just the quantity of responses.

The digital divide is another important issue.

While AI-powered tech comes with a promise to reach more people, it can also exclude those who are not tech-savvy, are nervous about AI technology, or lack access to the necessary technology. This is particularly problematic if we're trying to understand populations that are already marginalised.

AI-powered tools could help us scale - but only if we’re careful not to leave anyone behind. Criticism of AI-powered tools is often targeted at this risk - for example, the biases that AI tools exhibit because they are trained mainly on internet data (where a higher proportion of users are educated, financially better off and English-speaking).

Talking to the test

One of the downsides of not having a human in front of them was that in some cases, participants seemed to treat the interview like a test of their knowledge - searching for answers via generative AI tools like ChatGPT or search engines, rather than providing their own understanding of the question.

When participants become too focused on giving “correct” answers, it naturally causes problems for any research aiming for a deep understanding of personal experiences or behaviours rather than factual knowledge.

For example, if we ask participants about their knowledge of a government scheme and they look up the answer, we risk overestimating how well that scheme is understood by the public and making less pertinent recommendations.

The design of AI-led interviews will therefore need to be thought through carefully so as not to encourage such a response.

AI-powered interviewers can augment human research but can’t replace it

AI-enabled tech has potential to help us scale our research, but it shouldn't come at the expense of quality, inclusivity, or ethics. If there’s one learning to take away from our experiment, it is that, for now, the only way to mitigate the risks inherent in deploying AI-powered tools in a qualitative capacity is to ensure researchers are regularly and meaningfully involved in the design, testing and launch of any AI-led interviews. This is sometimes referred to as having a ‘human in the loop’ - a critical friend who checks and challenges what goes into and comes out of the chatbot.

AI is a useful tool to help researchers amplify and augment their processes but it isn’t a substitute for a human-led qualitative study.

As Nesta and others continue to experiment with this technology, it will be critical to strike the right balance between breadth and depth, between scale and nuance, and between innovation and accessibility. Only by ensuring that there is a rigorous focus on the quality of the data being collected can researchers ensure that AI-led interviews truly help us along the path of innovation, achieving ambitious moonshot goals without leaving anyone behind.

Authors

David Bleines, Senior Researcher, Responsible Research Unit

Camille Stengel, Head of Qualitative Research and Quality Assurance, Responsible Research Unit

Natalie Lai, Senior Analyst, Discovery Hub