
How to involve the public in the development of artificial intelligence

This year algorithms discovered a planet, taught themselves to play chess and wrote a pretty bizarre Harry Potter novel. But they also allowed fake news to influence the US election and enabled advanced mass surveillance in China.

Artificial intelligence (AI) is already starting to shape how we live: not the kind that will see superintelligent robots destroy humanity, but the kind that might deny you a credit card or build a scarily accurate picture of your life. This will only continue as governments start to invest huge sums in the field.

Where are the voices of ordinary people in the conversation about how AI should develop? Here are four reasons why the government should engage the public in this conversation, and three suggestions for how to do it:

Public engagement could improve AI research: AI researchers are overwhelmingly male and are also likely to come from wealthy backgrounds. Theories of collective intelligence and cognitive diversity show that more diverse groups are better at solving problems. This lack of diversity also means that AI researchers often focus on solving the problems of people like them. Artificial intelligence holds many promises, but its exclusivity may be holding it back.

It could focus AI research on the most pressing challenges: Self-driving cars are designed to be autonomous, perhaps because they were born out of the military's need for vehicles that can operate in hostile environments. But a civilian transport system may work better if self-driving cars were seen as one part of an intelligent and connected system of roads, private vehicles and public transport. What if research was focused on addressing what the public thinks are the most pressing challenges, rather than the needs of investors, industry or the military? Could this lead to AI research that wasn't focused on either the trivial, such as serving up better Netflix recommendations, or the lethal, such as autonomous weapons systems?

It could make AI more ethical: When the public get a chance to tell experts what they think about science and innovation funding (in RCUK's Public Dialogues or the Eurobarometer survey, for example), they express strong views about the need for research to focus on pressing societal needs and for regulation to prevent the negative effects of innovation, such as job losses and loss of privacy. Public involvement in the development of AI could play an important part in guarding against these outcomes.

It’s the right thing to do: All of the preceding arguments for public involvement are instrumental; involving the public will improve the quality of AI research and target it at the most pressing problems. But artificial intelligence is going to have wide-ranging effects on everyone's lives, and yet only a tiny group of people is making important decisions about its development. Beyond any instrumental value that public engagement may have, there is a social justice argument for involving a much wider group of people in debating and designing artificial intelligence.

How can Government involve the public in AI research?

“It’s not rocket science: listen, and those who feel ignored will re-engage passionately.”*

The subtle art and science of citizen engagement is highly developed. But ultimately it is about listening to what the public has to say, giving policymakers a chance to hear viewpoints from outside the AI echo chamber of researchers, policymakers and businesses. Here are three simple ideas for how to do it:

Send the UK’s digital minister on a nationwide tour

The UK’s digital minister should follow the example of Andy Haldane, chief economist at the Bank of England, and go on a nationwide tour to feed the everyday experiences of ordinary households into policymaking on artificial intelligence. The minister could take others along on the tour, such as civil servants, researchers and business advocates, to learn about the concerns, fears and positive visions of citizens around the country.

Organise a nationwide series of debates on the development of AI

The Royal Society’s research into public attitudes to machine learning is an important piece of work, and the RSA's forthcoming citizens' juries on the use of AI in criminal justice could also be useful, but how can the Government engage the public beyond focus groups?

Firstly, get creative. Terms such as 'neural nets' and 'backpropagation' don’t mean anything to the public. Debates should be built around things that people can interact with: videos, games and physical installations that explore different scenarios. For example: “How would you feel if all your search data from the last five years was hacked?”

Secondly, why would the public want to take part in a series of debates on AI? As we have argued before, citizen engagement exercises need a clearly defined goal. These debates could have two potential goals. The first is to develop a set of public principles for AI research to guide its development along socially acceptable lines; this could look something like the eight principles that RCUK developed based on a review of its public engagement exercises. The second is to develop a set of public challenges for AI research: what pressing issues does the public think AI research should be directed at? This could be used to inform the Government’s AI industrial strategy challenge fund.

Appoint ‘public champions’ to each commission, expert group and inquiry

Experts, policymakers and business interests are well represented on commissions designed to explore the ethics of AI. Citizens might be consulted, but they aren’t represented. The Government and others who are setting up AI commissions should appoint a ‘public champion’ to ensure that the views of the public are represented. This person may be an expert in science communication, someone skilled in closing the distance between the public and the science and innovation establishment.

Public involvement in debating the principles behind the development of AI, and in setting the challenges that AI research is directed at: this is an agenda for an AI policy that could truly address the grand challenges of our era.

* See Poverty Safari: A Scottish Rapper's stirring polemic about deprivation

Author

Tom Saunders

Principal Researcher

Tom was a Principal Researcher in the inclusive innovation team.
