About Nesta

Nesta is an innovation foundation. For us, innovation means turning bold ideas into reality and changing lives for the better. We use our expertise, skills and funding in areas where there are big challenges facing society.

Better intelligence about artificial intelligence

Why we need better intelligence about artificial intelligence

In a recent Nature paper, researchers from Stanford University use machine learning to improve the productivity of battery-design experiments by a factor of thirty. Artificial intelligence (AI) techniques like this could dramatically speed up innovation in energy storage, lowering barriers to the adoption of electric vehicles and accelerating decarbonisation. The method developed in the paper could also be applied in other areas of science, such as drug discovery or materials science.

Similar breakthroughs in computer vision, translation, game playing or robotics suggest that AI systems able to behave appropriately in an increasing number of situations could be the latest example of a General Purpose Technology that will transform productivity and revolutionise scientific R&D, helping tackle some of humanity’s greatest challenges around health and the environment.

At the same time, powerful AI systems create important technical, economic and ethical risks.

  1. AI researchers such as Gary Marcus or Judea Pearl argue that the dominant paradigm of AI research based on deep artificial neural networks produces systems that lack the common-sense, causal understanding of the world required to avoid making dangerous mistakes. Stuart Russell contends that AI systems built for optimisation are inherently unsafe because they will often optimise the wrong objective function. A recent working paper by Iason Gabriel suggests that certain AI techniques could be inconsistent with democratic processes and institutions.
  2. A growing number of economists including Daron Acemoglu, Pascual Restrepo, Anton Korinek or Joseph Stiglitz point to market failures in AI. These range from externalities in labour-displacing and dual-use AI technologies to information asymmetries between AI adopters and consumers that could result in privacy infringements and user manipulation. Economies of scale in AI deployment might cement monopolies and lock our economy into the AI technologies that are most competitive today, regardless of their long-term value.
  3. Critical scholars of AI such as Joy Buolamwini and Virginia Eubanks, and researchers at think-tanks like the AI Now Institute and Data & Society, have shown that AI systems trained on biased data can discriminate against minorities and entrench existing inequalities, and have linked these outcomes to a lack of inclusion in the AI research workforce.

These critiques imply that AI systems can be built using a diversity of approaches, and that selecting between them involves technical, economic and ethical trade-offs (e.g. between high-performing but opaque deep learning algorithms and less accurate but easier-to-interpret statistical methods). We cannot assume that markets will, on their own, resolve these trade-offs in ways that maximise societal welfare. Like other technologies, AI systems are socially constructed, and there is a risk that the groups who participate in their development and deployment will ignore the needs and values of those who are excluded, or expose them to unfair risks. Ultimately, these critiques provide a rationale for public interventions to steer AI in more societally desirable and inclusive directions.

Governments, businesses and NGOs across the world are responding to these concerns through national strategies, ethical charters, declarations of principles, and networks aimed at realising the value of AI while managing some of its risks. Nesta's Map of AI Governance highlights many of these initiatives and their rapid growth in recent years.

In the UK, the Government launched an AI Sector Deal in 2018 with the goal of realising the economic and social potential of AI. This includes initiatives to increase the supply of AI talent and the creation of a Centre for Data Ethics and Innovation to “ensure safe, ethical and innovative uses of AI”. AI is also one of the Grand Challenge areas intended to “put the UK at the forefront of the industries of the future”, with a current focus on using AI techniques to transform the prevention, diagnosis and treatment of chronic diseases by 2030. This mission illustrates how governments are trying to encourage the deployment of AI to tackle big social challenges. The European Commission recently published a White Paper on AI that seeks to enable scientific and economic AI breakthroughs while maintaining trust and respecting rights.

These ambitious policy agendas face formidable informational challenges: how can policymakers choose which rapidly evolving AI technologies and applications to support, and how can they intervene in their development without distorting resource allocation or succumbing to capture by vested interests? This question requires an urgent answer: according to Collingridge’s dilemma, by the time there is enough information about a technology to intervene in its development, it is generally too late to change its course. In the face of these difficulties, some think-tanks have called for an approach to policy that allows permissionless innovation by the private sector, or even for outsourcing policy activities such as AI regulation to private-sector companies with better information and incentives.

A no-regrets activity in this context is to improve the amount and quality of the evidence available about the state and evolution of AI, so that whatever policies are put in place are well informed, and so that policymakers can closely monitor those AI markets where they decide to adopt a more laissez-faire approach. By reducing uncertainty, this evidence can also help the private sector make better decisions. In the rest of this blog, I highlight some areas of AI that would benefit from more evidence and indicators, and summarise Nesta’s programme of work to build better intelligence about AI.

Counting is not enough

Simple measures of AI activity, such as counts of AI research papers and patents, or aggregate statistics about talent supply and levels of investment in AI startups, are useful but hide important information needed to inform a policy agenda for inclusive, societally beneficial AI. For example, they tell us little about:

  1. The nature of the AI technologies being developed: Is research funding concentrated on the dominant deep learning paradigm, or is it encouraging alternative approaches to develop AI systems that are more explainable, more robust and less data-hungry? Can we start to operationalise different dimensions of AI performance in order to get an empirical handle on the trade-offs between AI systems? How much research is taking place in controversial application areas such as facial recognition or lethal autonomous weapons? Can we say anything about the purpose of the AI systems being developed, such as whether they focus on automating jobs or on augmenting them?
  2. Participation by different groups in AI R&D: As mentioned above, there are concerns that a homogeneous AI R&D workforce might develop AI systems that discriminate against minorities and against less affluent and vulnerable groups. What are the levels of gender, ethnic and socio-economic diversity in AI R&D, and how are they linked to the development of inclusive AI systems? How many lost Lovelaces is the AI workforce missing because of its lack of inclusion?
  3. Sectoral differences in AI R&D and diffusion: Technology companies appear to lead the development of state-of-the-art AI techniques trained on big proprietary datasets and computational infrastructures. What is the level of corporate participation in AI R&D in different innovation systems, and what is its impact? Are rapid improvements in the techniques that these companies favour, and a ‘brain drain’ from academia to industry, reducing the space and resources for “public interest AI research” that might be more relevant for other sectors?
  4. The subnational geography of AI R&D: In the last three decades we have seen increasing economic disparities between fast-growing, innovative ‘supercities’ and so-called left-behind places. A growing body of literature suggests that this has contributed to the growth of populism and political polarisation. Will the arrival of AI exacerbate these problems by further concentrating economic activity and innovation in a small number of places while disadvantaging communities everywhere else, thus fuelling new political conflict between those regions that automate and those that are automated?
  5. AI geopolitics: We are currently witnessing what appears to be a race between “AI superpowers” with different visions for the future of AI and for the societies that adopt it. Are the political values and priorities of different countries embedded in their AI R&D agendas, and how are these shaping global AI research trajectories? To what extent is the perception of a race affecting collaboration between AI research communities in different countries?

Creating these indicators requires new data sources that capture AI R&D activity wherever it happens, from open research and open source repositories that document advances in AI R&D close to real time, to online job ads that tell us how businesses in different sectors are building up their AI capabilities.
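As an illustration, the sketch below harvests recent titles and abstracts from arXiv, one real example of an open repository with a public API that documents AI research close to real time. The category filter and the fields extracted are illustrative assumptions, not a description of Nesta's actual pipeline.

```python
# Minimal sketch: harvest recent preprint metadata from the public arXiv API.
# The category (cs.LG) and result count are illustrative choices.
import urllib.parse
import urllib.request

import feedparser  # pip install feedparser

ARXIV_API = "http://export.arxiv.org/api/query"


def fetch_arxiv_abstracts(category="cs.LG", max_results=50):
    """Return recent titles, abstracts and dates for one arXiv category."""
    query = urllib.parse.urlencode({
        "search_query": f"cat:{category}",
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    with urllib.request.urlopen(f"{ARXIV_API}?{query}") as response:
        feed = feedparser.parse(response.read())
    return [
        {"title": e.title, "abstract": e.summary, "date": e.published}
        for e in feed.entries
    ]


if __name__ == "__main__":
    papers = fetch_arxiv_abstracts()
    print(f"Fetched {len(papers)} recent machine learning preprints")
```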

Data science methods are also needed to collect, enrich and process this data. For example, natural language processing (NLP) methods can help to identify AI-related activity in the abstracts of research papers, patents and grants, and topic modelling can generate measures of the thematic composition of the field, distinguishing between various techniques and application domains. We can use network science to measure the novelty of research and innovation activities based on the concepts that they combine. In addition to generating more relevant and inclusive indicators of AI R&D, this rich data can be published in interactive formats such as data visualisations and dashboards where users can explore their own questions.
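A minimal sketch of this kind of pipeline is below, assuming a hypothetical keyword list for flagging AI-related abstracts and using scikit-learn's LDA implementation for the topic modelling step. Both are illustrative choices rather than the exact methods used in the analyses discussed here.

```python
# Sketch: flag AI-related abstracts with a crude lexical filter, then fit a
# topic model to summarise the thematic composition of the flagged corpus.
# AI_SEED_TERMS and all parameters are illustrative assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

AI_SEED_TERMS = {"neural network", "deep learning", "reinforcement learning",
                 "machine learning", "computer vision"}


def is_ai_related(abstract: str) -> bool:
    """Does the abstract mention any of the seed terms?"""
    text = abstract.lower()
    return any(term in text for term in AI_SEED_TERMS)


def topic_composition(abstracts, n_topics=10, top_words=8):
    """Fit LDA over the AI-flagged abstracts; return top words per topic."""
    ai_abstracts = [a for a in abstracts if is_ai_related(a)]
    vectoriser = CountVectorizer(stop_words="english", max_features=5000)
    doc_term = vectoriser.fit_transform(ai_abstracts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(doc_term)
    vocab = vectoriser.get_feature_names_out()
    return [
        [vocab[i] for i in topic.argsort()[-top_words:][::-1]]
        for topic in lda.components_
    ]
```

In practice, the abstracts would come from harvesting steps like the one sketched earlier, and the crude keyword filter would be replaced with a trained classifier; similar counting over the concepts that papers combine could underpin the novelty measures mentioned above.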

These are precisely the methods that Nesta’s innovation mapping team specialises in.

What we have done so far

In the last two years, we have published several papers, reports and tools to develop indicators and maps along the lines above. I summarise them in turn:

  1. Gender diversity in AI research: In this paper, we study the evolution of gender diversity in AI research on arXiv, a preprint website where AI researchers disseminate their findings. Our analysis shows that the share of female authors in AI and the share of AI papers with at least one female author are very low (12% and 25% respectively), and that these percentages have barely improved since the 1990s. We also find that gender diversity is particularly low in machine learning and data-related sub-disciplines. Our analysis suggests that AI papers involving at least one female author tend to focus on more applied and socially oriented topics than those without female authors, even after we control for field, year and country. This highlights the potential benefits, in terms of more impactful research, of making the AI workforce more inclusive.
  2. Mapping (AI) Missions: Here, we analyse UK research funding data to generate indicators about levels of activity at the intersection of AI and chronic diseases, a Grand Challenge mission area set up by the UK Government. We show that chronic diseases are underrepresented in AI research, and that this research tends to be dominated by computer science topics, although the situation appears to have changed in recent years as medical sciences and biotechnology funders have started supporting projects that apply AI methods in their disciplines. This analysis highlights how data science methods can generate indicators to inform mission-oriented policies that encourage the deployment of AI systems to tackle important social challenges.
  3. A semantic analysis of AI research: Here we use topic modelling to analyse the composition of AI research and how it has evolved over time. We show how the field has been transformed by the deep learning revolution since the 2010s. Our analysis also demonstrates that aggregate indicators of AI activity (the simple counts referred to above) can mask important policy-relevant patterns: for example, China’s lead over the EU in AI research is understated when our analysis includes papers using symbolic and statistical techniques far from the frontier of AI research.
  4. Deep Learning Deep Change: In this paper, we monitor the adoption of deep learning methods in different sub-disciplines of computer science (a toy version of this indicator is sketched after this list), confirming the idea that deep learning is becoming a new “invention in the method of invention” with the attributes of a General Purpose Technology. We then study the global geography of deep learning research, evidencing the ascent of China, the relative decline of EU countries, and the increasing concentration of deep learning research activity in a small number of regions. We also use data from CrunchBase, a startup directory, to measure the presence of industrial capabilities relevant for AI in different regions, showing that regions where AI research co-locates with related business activity are more likely to develop a strong AI R&D cluster. These results underscore the idea that investing in a region’s AI research base when it lacks complementary business capabilities may not be sufficient to develop a sustainable competitive advantage in the sector.
  5. arXlive: We have also launched arXlive, an interactive tool to explore the enriched datasets developed through the analyses above. arXlive contains HierarXy, a research search engine, and DeepChange, a real-time version of our Deep Learning Deep Change paper.
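As promised above, here is a toy version of the adoption indicator monitored in Deep Learning Deep Change: the yearly share of papers in each computer science sub-discipline that use deep learning methods. The input schema is a hypothetical simplification of the paper's enriched arXiv dataset.

```python
# Toy adoption indicator: share of deep learning papers per sub-discipline
# and year. The input columns are a hypothetical simplification.
import pandas as pd


def deep_learning_adoption(papers: pd.DataFrame) -> pd.DataFrame:
    """Expects columns: year, subdiscipline, uses_deep_learning (bool)."""
    return (
        papers.groupby(["subdiscipline", "year"])["uses_deep_learning"]
        .mean()
        .rename("dl_share")
        .reset_index()
    )


example = pd.DataFrame({
    "year": [2012, 2012, 2018, 2018],
    "subdiscipline": ["cs.CV", "cs.CV", "cs.CV", "cs.CV"],
    "uses_deep_learning": [False, True, True, True],
})
print(deep_learning_adoption(example))  # cs.CV: 0.5 in 2012, 1.0 in 2018
```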

In order to build trust in the experimental data sources and indicators we use, we have made our data and code available for several projects. For example, all the code and data for the Deep Learning Deep Change paper are available here, as well as the code for arXlive. Our data has already been used in Stanford HAI’s AI Index, and in a map of global AI research developed by the World Economic Forum.

Where to next?

We are currently working on new analyses of AI in the UK Creative Industries and of research at the intersection of AI and Collective Intelligence, in collaboration with Nesta's Centre for Collective Intelligence Design. We are also further analysing the arXiv data with the goal of measuring private sector participation in AI research and its link with the evolution of thematic diversity in the field, studying the regional concentration of AI research globally, and monitoring global trends in surveillance-enabling AI technologies such as facial recognition and pedestrian identification. We will publish our results in the coming months, together with the code and data required to review and reproduce this work.

Ultimately, we believe that AI measurement efforts would benefit from more coordination, mission-orientation, standardisation and automation.

Coordination would help researchers respond to policy priorities faster than is possible through the decentralised, bottom-up processes that often guide the behaviour of scholarly communities. Mission-orientation, that is, a clear focus on the evidence needs of policymakers and practitioners, would encourage the kind of interdisciplinary collaborations required to shed light on AI dynamics at the intersection of science, technology, economics and society, while also advancing theory. Standardisation, for example around how AI is defined and how these definitions are operationalised, would make it possible to compare and triangulate different studies more easily than is possible today. Automation, for example through the development of an open source infrastructure to collect, enrich and analyse data about AI, would improve efficiency, enhance reproducibility, and make it possible to quickly combine data and methods to triangulate results and explore new questions (we called for something similar in the field of the Science of Science last year). This infrastructure could also be used to automate the creation of open intelligence about AI, making it possible to monitor the evolution of the field closer to real time.

Two interesting models for this effort are the Research on Research Institute (RoRI), an international consortium of research funders, academics and technologists working to champion transformative and translational research on research, and the Creative Industries Policy and Evidence Centre, a Nesta-led institute bringing together academics from different disciplines to improve the evidence base for creative industries policy in the UK. Perhaps we need similar initiatives focused on AI.

We believe that only concerted, interdisciplinary measurement and analysis efforts along these lines will give us a shared understanding of the state of AI and its evolution, and ensure the effectiveness of policies that steer AI in a direction where its value is realised for the benefit of more people.

Get in touch here if you want to find out more or collaborate with us to advance this agenda.

Author

Juan Mateos-Garcia

Director of Data Analytics Practice

Juan Mateos-Garcia was the Director of Data Analytics at Nesta.
