Using artificial intelligence for social impact

There’s a flurry of excitement about recent developments in artificial intelligence (AI).

The arrival of powerful image generators, AI agents able to perform multiple tasks and seemingly (to some) sentient chatbots is an exciting prospect for data scientists who use machine learning to tackle big societal challenges in areas such as health, education and the environment.

One of the unofficial remits of AI is to “solve intelligence and then solve everything else”. We have to assume that “solving” would include reducing inequalities in education, tackling obesity and decarbonising our homes. Are we about to get AI systems that could help us solve these problems?

Industrialised AI

The dominant model for AI is an industrial one. It trains deep artificial neural networks on large volumes of web and social media data. These networks learn predictive patterns and can be useful for perception tasks such as identifying a face in a photo. They are good for tasks that require no human input, or where we are not interested in understanding why someone made a choice, such as liking a social media post.

The large technology companies developing these systems use them to predict relevant search engine results, identify the most engaging social media content and make recommendations that could result in a purchase. This helps these companies build more engaging and profitable websites and apps.
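
To make that pattern concrete, here is a minimal sketch, using entirely synthetic data and hypothetical feature names, of the industrial recipe: a network is fitted to behavioural data with the single goal of predicting an engagement outcome.

```python
# A minimal sketch (entirely synthetic data, hypothetical feature names)
# of the industrial pattern: fit a neural network to behavioural data
# purely to predict an engagement outcome.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical behavioural signals: time on page, past likes, follower count.
X = rng.normal(0, 1, size=(n, 3))

# Synthetic engagement label: did the user like the post?
y = (0.9 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)

# A small deep network learns predictive (not explanatory) patterns.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X, y)
print("training accuracy:", round(model.score(X, y), 3))
```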

But when it comes to social impact sectors, data is scarce, explanation is more important than prediction and making mistakes could cost lives.

An absence of data in the social sector

Our societies face big challenges: a widening outcome gap between the poorest children and the rest; growing obesity rates; the need to urgently reduce household emissions. AI systems such as those described above could help tackle these challenges by mapping the problems and targeting solutions. Unfortunately, this is easier said than done.

Search engines and social networking sites generate vast amounts of standardised data that is highly predictive of relevant outcomes. By contrast, social impact sectors such as education and health comprise hundreds or thousands of organisations (local authorities, hospitals or schools), each collecting small, incomplete and disconnected datasets. Social media or search engine data that could be relevant for improving outcomes in health (the food adverts different groups are exposed to, for example) or education (which social network structures increase community resilience and social mobility, for example) is expensive or impossible to access. Even if access were possible, such data would probably provide a biased view of the situation, excluding or underrepresenting some vulnerable groups.

Of course, data gaps could be overcome by investing in data collection, standardisation and integration. But even if we did this, training massive deep learning models on this data and using them to make predictions at scale might not be a good idea. Here’s why.

Model risks

There are important mismatches between the outputs generated by industrial AI models and the context of social impact sectors. Trying to apply one to the other is like taking a self-driving car trained in the suburbs of Arizona into the dense and busy streets in the centre of a European city. There will be accidents.

"Trying to apply industrial AI to the context of social impact sectors is like taking a self-driving car trained in the suburbs of Arizona into the dense and busy streets in the centre of a European city. There will be accidents."

First, industrial AI models lack robustness: they often fail to generalise to situations outside their training set. This is a problem in sectors such as health, where the composition and behaviour of the population change over time, degrading the quality of an AI system’s predictions, perhaps drastically. We saw this at the beginning of the Covid-19 pandemic, when the sudden economic and social shock disrupted many machine learning and analytics models trained on historical data.
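
As an illustration (a toy example with synthetic data, not a model of any real service), the sketch below trains a classifier on one distribution and then evaluates it after the relationship between features and outcomes has changed: performance drops from near-perfect to chance.

```python
# A minimal sketch (synthetic data) of the robustness problem: a model
# trained on historical data collapses when the relationship between
# features and outcomes shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# "Pre-shock" data: the outcome is driven by the first feature.
X_train = rng.normal(0, 1, size=(1000, 2))
y_train = (X_train[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# "Post-shock" data: the outcome is now driven by the second feature,
# so the pattern the model learned no longer holds.
X_shift = rng.normal(0, 1, size=(1000, 2))
y_shift = (X_shift[:, 1] > 0).astype(int)

print("pre-shock accuracy: ", accuracy_score(y_train, model.predict(X_train)))
print("post-shock accuracy:", accuracy_score(y_shift, model.predict(X_shift)))
# Accuracy falls from near-perfect to roughly chance (~0.5).
```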

Second, deep learning models are effective at generating predictions, but not explanations. The patterns they infer from data are correlational rather than causal. Without explanations for a decision, people working in public services are disempowered: they lose the agency and ability to understand or challenge high-stakes decisions. It also becomes harder to interpret a model’s outputs in order to determine whether it is working.
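
A toy example (synthetic data, hypothetical variable names) shows the problem: the model below finds a strong predictive relationship between attendance at a programme and later outcomes, but the relationship is entirely driven by an unobserved confounder, so intervening on attendance would change nothing.

```python
# A minimal sketch (synthetic data, hypothetical variable names) of a
# correlational pattern that is not causal: an unobserved confounder
# drives both the feature and the outcome.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5_000

# Unobserved confounder (eg, neighbourhood deprivation) affects both
# who attends a support programme and their later outcomes.
deprivation = rng.normal(0, 1, n)
attendance = 0.8 * deprivation + rng.normal(0, 1, n)
outcome = 1.5 * deprivation + rng.normal(0, 1, n)  # attendance has NO causal effect

# A predictive model finds a strong relationship all the same.
model = LinearRegression().fit(attendance.reshape(-1, 1), outcome)
print("learned coefficient:", round(model.coef_[0], 2))  # ~0.73, far from zero

# Controlling for the confounder recovers the true (null) effect.
X_adjusted = np.column_stack([attendance, deprivation])
adjusted = LinearRegression().fit(X_adjusted, outcome)
print("adjusted coefficient:", round(adjusted.coef_[0], 2))  # ~0.0
```

The prediction is accurate, but acting on it would not change the outcome; only causal knowledge reveals this.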

Ethical risks

We know that some AI systems can entrench discrimination in the organisations that adopt them, for example by predicting that groups that have been historically discriminated against are more likely to commit crimes.

Even systems designed to generate fair predictions can still produce unjust outcomes. For example, they might help an exploitative organisation reduce costs at the expense of service quality. Predictive AI systems might also favour “targeting interventions” that act directly on a predicted outcome (eg, denying bail to individuals predicted to be at high risk of reoffending) over the deeper interventions that causal knowledge about the drivers of that outcome would make possible (eg, support that mitigates the risk of reoffending in the first place).

Political risks

The complexity of social challenges makes them difficult to tackle. There is disagreement about the goals we want to achieve and the methods for achieving them; this is often visible in the debate about the role of technological versus behavioural change in reaching sustainability goals. Any technical system that skews debates towards particular values or interventions should therefore be approached carefully. There is evidence that industrial AI systems tend to prioritise certain values, such as efficiency, at the expense of others, such as transparency or participation. When deployed, they could embed these values into organisational infrastructure and processes that would be hard to change down the line. They could also centralise decision-making and reduce the scope for democratic debate, accountability and experimentation.

Together, these factors make industrialised AI systems less suitable for sectors marked by diverse values and constantly changing conditions.

Craft AI

None of this is to say that data science and machine learning have nothing to contribute to social impact sectors. They can be very valuable – if they are deployed following a different approach that takes the sector’s context into account. We call this approach craft AI.

These ideas are inspired by the work of a wide range of communities, including data scientists for social impact, researchers in areas such as causal and participatory machine learning, critical studies of AI, responsible and inclusive innovation, and data justice. Many of them have been put forward in response to the ethical risks raised by industrialised AI models, and as ways to overcome technical barriers that might prevent us from developing truly general-purpose and trustworthy AI systems.

Craft AI is likely to be slower, more complex, more localised and less scalable than industrialised AI. It also requires more human involvement. Perhaps it makes sense to think of it as a form of intelligence augmentation (IA), where we use machines to boost human capabilities rather than automate them away, without shying away from the responsibility to tackle the biggest challenges of our time thoughtfully and carefully.

Authors

Juan Mateos-Garcia

Director of Data Analytics Practice

Juan Mateos-Garcia was the Director of Data Analytics at Nesta.

George Richardson

Head of Data Science, Data Analytics Practice

George is Head of Data Science in Nesta’s Data Analytics Practice.