Public agencies in the UK are increasingly encouraged to take advantage of AI to improve public services, while acting as a role model for using AI responsibly and meeting public expectations around equity, fairness and transparency.
Despite growing recognition that public trust is fundamental to the successful adoption and use of AI in the public sector, evidence from public polling suggests there’s a long way to go: the public is pessimistic about AI’s impact on society, particularly in developed markets.
In this project, we will prototype and test a novel process for AI assurance – involving the public in assessing the social acceptability of the risks posed by specific AI tools. This fills a critical gap in the current landscape of services being developed to test and evaluate AI systems.
Taking a deliberative polling approach, we’ll ask people to weigh up the benefits and risks of specific AI tools and provide guidance on what uses are socially acceptable. Using these insights, we’ll create an easy-to-understand Social Readiness Advisory Label that can be used by public sector staff as they navigate decisions about AI.
We want to involve members of the public in decision-making about how AI is used in the public sector and to support public sector staff to be confident that they are using AI responsibly with a public mandate.
Ultimately, we hope this approach will help to build public trust in how AI is being used in public services and move us closer to a future where AI works in the public interest.
We think that AI tools should be developed with input from a diverse range of stakeholders. This helps ensure they are more accurate and appropriate for the task at hand, and that the worst harms are avoided as far as possible. We call this approach participatory AI.
Our previous work on participatory AI focussed on applying collective intelligence methods at the early stages of the AI development pipeline. For example, crowdsourcing more diverse datasets for training AI models and using participatory design with frontline users to decide which problems AI tools should focus on.
With the emergence of foundation models, also called general-purpose AI, it’s become more difficult to influence these earlier phases of AI technology development. A handful of companies are developing the AI models that the majority of other AI tools are built on. Our people-centred approach to AI governance will ensure there’s a way to bring diverse voices to the table during the deployment of AI systems.
This is particularly important in a public sector context, where AI tools that are inaccurate, biased or simply not used as intended have the potential to cause significant harm.
We hope our methodology can help the UK public sector harness the potential of AI tools to improve the efficiency and quality of public services while ensuring this is in line with public values and builds trust, transparency and accountability.
Over the next 12 months, we’ll be creating a proof-of-concept for the Social Readiness Advisory Label. We’ll design and test a deliberative polling approach to understand the social acceptability of risks that could result from deploying AI tools in the UK public sector context.
We’ll run this process twice, focusing on two different AI tools – one developed for use by civil servants in central government and a second developed for use by local authorities.
We’ll work with public sector staff to understand how they currently make decisions about AI procurement, deployment and risk management, and how a people-centred approach could best support them in making these decisions. We’ll also test the long-term viability and feasibility of this approach, ensuring that we align our process to the realities of the fast-moving regulatory landscape and the priorities set out in the UK government’s AI Opportunities Action Plan and the Blueprint for modern digital government.
This project will enable us to refine and improve the process and build an initial evidence base for the approach.
Get in touch if you would like to know more about this project, or if you have an AI tool being deployed in UK public services that you would like to test through our ‘social assurance’ process.
This project is funded through a grant from the Future of Life Institute. Information about our partners and advisors will be added at a later date.