
Successfully embedding AI in public services will require public oversight

This week we learnt that public officials have been complaining of ‘frustrations and false starts’ after ministers called off half a dozen welfare AI prototypes.

Whilst not every trial and test will lead somewhere, this news - coming only a couple of weeks after the PM announced the UK’s AI Opportunities Action Plan - shows that the path to integrating AI into our public services is not going to be smooth.

Whilst the appeal is clear in theory - more efficient delivery, cheaper services, better outcomes for citizens - we can’t forget the previous failures to streamline public services with AI, such as the Home Office visa application algorithm or the facial recognition technology used by the Met.

In both cases, the technology was rolled out without proper checks to understand its limitations, without sufficient staff training on responsible use, and without proper public consultation to understand the acceptability of these approaches.

Quality assuring AI

How do we make sure things like that don’t happen again?

By creating the structures and mechanisms to ensure that AI is used safely and responsibly, and trusted by the wider public - known in the AI world as ‘assurance’.

The government’s recent Action Plan and Blueprint for digital government highlight some important commitments on assurance: a promise to create a ‘Responsible AI Council’, “government-backed high-quality assurance tools” and continued support for the UK’s flagship AI Safety Institute.

But they’re light on the details of how to get there.

In the private sector, a recent DSIT review found signs that the market for robust AI assurance is growing - it estimated that the UK assurance market will exceed £6bn in the next 10 years. (Currently, it's seriously under-performing, both in terms of supply and demand.)

Responsible use of public sector AI tools can’t be an afterthought: we need robust guidance and independent services to help us get there.

But in both the private and public sectors there is a lack of incentives to drive robust standards and embed processes that foster responsible AI as part of procurement, especially in the face of a push towards accelerated adoption.

And even when assurance is in place, it focuses on technical performance or regulatory compliance with data security and privacy standards. But it is currently missing a crucial third component - one that is key to the government’s ambition to build and use technology 'in line with public values' - democratic oversight.

Citizen oversight is absent

While there are some rare examples of consultations about the use of specific AI technologies (such as at the Department for Education), most public engagement has focussed on polling the public about general concerns and perceptions without getting into the meat of the limitations of specific tools and how these might impact individual lives.

This has meant that civil servants making decisions about the use of AI have little practical guidance about what risks the public are most concerned about in any given instance and what kind of solutions they would be comfortable with.

We need public voices in AI decision making

Right now the UK is focusing on technical benchmarks for AI assurance. Whilst most AI models now score 70-80% or higher on industry-wide technical benchmarks, technical assurance of AI is not enough - especially if you’re using AI to make decisions about public services.

It’s time to start evaluating them on social and ethical benchmarks through public participation, especially by involving people who are most vulnerable, like older adults, or marginalised, such as people from minority communities. It’s most often these groups that are left out of training data for AI models, and so they have the greatest chance of receiving incorrect results when AI-powered tools are applied to their cases.

Without the public, there will be little incentive to drive the adoption of safety standards and procedures for AI in the public sector.

Technology companies themselves are starting to understand the importance of public involvement.

In 2023, OpenAI funded several experiments on value alignment to inform the future development of its models, Meta ran public dialogues and Anthropic (backed by Amazon) trained its algorithms to follow a constitution sourced from the US public. But the details of implementation remain opaque, with only vague commitments to incorporate public input, and it remains to be seen how these activities will fare when faced with market competition.

Public services can and should do better.

Centre for Collective Intelligence: AI governance

At Nesta, we’ve been thinking about the role of the public in AI design and governance since 2020. Our participatory AI methodology was one of the first examples of how to make AI tools work for a greater number of stakeholders than just developers.

More recently, we’ve developed a collective intelligence approach to AI assurance: we bring together members of the public, especially those who stand to be impacted most, to learn about and discuss the use of AI tools in a given context and give their opinions on acceptable risks and what safeguards should be in place.

At the end of this process, an AI Social Acceptability label is created. This captures the nuance and recommendations of public opinion so people in the public sector can consider this as part of their decision making, alongside the more technical assurance tests.

And we know it works. In November 2024, we worked with earthquake-affected communities in Antakya, Turkey, to review an AI tool used by humanitarian organisations to assess damage to their buildings, and found that people didn’t need any technical background to provide meaningful feedback and insight. And contrary to expectations that “communities will say no”, we found that the majority of participants were positive about institutions using the tool to improve future response operations in their area, after weighing up the potential benefits and risks.

Public sector staff aligned with public values

The government won’t be able to deliver on the vision set forth in its Blueprint for modern digital government of acting as a role model for Responsible AI and demonstrating “the use of AI in line with public values to promote equity, fairness, and transparency” without robust methodologies for bringing public voices into the oversight and governance of public sector AI.

Over the next 12 months, we’ll be testing our public-in-the-loop approach to AI assurance, focusing on two different tools: one being used in central government and a second developed for use by local authorities.

Ultimately, we’re aiming to support public sector staff to feel confident that the AI tools they’re using align with public values, and that they understand the public acceptability of any risks those tools might pose. At the same time, we believe our approach will also build public trust in how AI is being deployed in public services.

If you would like to know more or get involved in this work, please contact us at the Centre for Collective Intelligence.

Author

Aleks Berditchevskaia

Principal Researcher, Centre for Collective Intelligence Design

Aleks Berditchevskaia is the Principal Researcher at Nesta’s Centre for Collective Intelligence Design.
