
Participatory AI for humanitarian innovation: a briefing paper

This working paper outlines the current approaches to the participatory design of AI systems, and explores how these approaches may be adapted to a humanitarian setting to design new ‘Collective Crisis Intelligence’ (CCI) solutions. We define collective crisis intelligence as the combination of methods that gather intelligence from affected communities and frontline responders with artificial intelligence (AI) for more effective crisis mitigation, response or recovery.

This paper accompanies our report on Collective Crisis Intelligence, an emerging innovation approach being used to improve anticipation, management and response in the humanitarian sector: Collective crisis intelligence for frontline humanitarian response.

Participatory Artificial Intelligence or Participatory Machine Learning, in the broadest sense, refers to the involvement of a wider range of stakeholders than just technology developers in the creation of an AI system, model, tool or application.

‘Participatory AI for humanitarian innovation: a briefing paper’ responds to the growing interest in using participatory approaches for the design, development and evaluation of AI systems across industry, academia and the public sector.

Based on a rapid analysis of existing Participatory AI case studies and the academic literature, this paper proposes a new framework for participatory approaches to inform the design, development and implementation of AI solutions. The framework outlines participatory interventions that could be used at different stages of the AI design and development pipelines.

The depth of engagement and the methods involved can be split into four categories:

  • Consultation: refers to participation where input is gathered outside the core AI development process and is not guaranteed to influence the design of the AI system. Common methods include polling and deliberation.
  • Contribution: refers to participation that is time-limited to one stage of the AI development pipeline. Here, external stakeholders complete one of the tasks necessary for AI development, e.g. data collection, data labelling or validation of model outputs. Common methods include both targeted and open crowdsourcing.
  • Collaboration: refers to participatory practices with multiple touchpoints along the AI development pipeline and/or where external stakeholders are able to meaningfully interrogate the model and shape the features it uses to make predictions or classifications, even if they were not involved in problem setting.
  • Co-design: is the most comprehensive form of stakeholder involvement in AI design and development. It involves engagement at multiple stages throughout the pipeline and beyond it, with all of the stakeholder groups discussing their needs, values and priorities, both with respect to the problem space and the technology.

We illustrate the framework using three in-depth case studies and suggest five key design questions to help others design participatory AI projects:

  1. Who defines the process and what success looks like?
  2. Whose participation is required?
  3. What is the intent behind participation?
  4. How will participants be rewarded?
  5. What is the process for closing the project?

Applying Participatory AI approaches in humanitarian settings

In the final section of the report, we explore the relevance of Participatory AI to the humanitarian sector. Drawing on the Core Humanitarian Standard, we highlight how AI systems may jeopardise humanitarian principles or the rights of crisis-affected communities, and give examples of participatory approaches that could help to address some of the risks.

Although participatory design alone will not be enough to address all of the critiques of AI in humanitarian settings, when developed alongside other complementary measures it can help to strengthen the ecosystem for responsible AI.

This briefing paper will inform our approach to testing and evaluating participatory AI in humanitarian contexts as part of a larger project on CCI for humanitarian action that Nesta is delivering in partnership with the IFRC Solferino Academy. The project is funded by a grant from the UK Humanitarian Innovation Hub (UKHIH), which is itself funded by the UK’s Foreign, Commonwealth and Development Office (FCDO) and hosted by Elrha, a global humanitarian organisation and the UK’s leading independent supporter of humanitarian innovation and research.

Authors

Aleks Berditchevskaia
Principal Researcher, Centre for Collective Intelligence Design

Aleks Berditchevskaia is the Principal Researcher at Nesta’s Centre for Collective Intelligence Design.
Eirini Malliaraki
Systems Architect and Designer, Nesta's Centre for Collective Intelligence Design

Eirini Malliaraki is a Systems Architect and Designer at Nesta's Centre for Collective Intelligence Design.
Kathy Peach
Director of the Centre for Collective Intelligence Design

The Centre for Collective Intelligence Design explores how human and machine intelligence can be combined to develop innovative solutions to social challenges.