In a recent Nature paper, researchers from Stanford University use machine learning to make battery-design experiments thirty times more productive. Artificial intelligence (AI) techniques like this could dramatically speed up innovation in energy storage, lowering barriers to the adoption of electric vehicles and accelerating decarbonisation. The method developed in the paper could also be applied in other areas of science, such as drug discovery or materials science.
Similar breakthroughs in computer vision, translation, game playing and robotics suggest that AI systems able to behave appropriately in a growing range of situations could be the latest example of a General Purpose Technology: one that will transform productivity and revolutionise scientific R&D, helping tackle some of humanity’s greatest challenges around health and the environment.
At the same time, powerful AI systems create important technical, economic and ethical risks.
These concerns imply that AI systems can be built using a diversity of approaches, and that selecting between them involves technical, economic and ethical trade-offs (e.g. between high-performing but opaque deep learning algorithms and less accurate but easier-to-interpret statistical methods). We cannot assume that markets will, on their own, resolve these trade-offs in ways that maximise societal welfare. Like other technologies, AI systems are socially constructed, and there is a risk that the groups who participate in their development and deployment could ignore the needs and values of those who are excluded, or expose them to unfair AI risks. Ultimately, these concerns provide a rationale for public interventions to steer AI in more societally desirable and inclusive directions.
Governments, businesses and NGOs across the world are responding to this demand for intervention through national strategies, ethical charters, declarations of principles, and networks aimed at realising the value of AI while managing some of its risks - Nesta’s Map of AI Governance highlights many of these initiatives and their rapid growth in recent years.
In the UK, the Government launched an AI Sector Deal in 2018 with the goal of realising the economic and social potential of AI. This includes initiatives to increase the supply of AI talent and the creation of a Centre for Data Ethics and Innovation to “ensure safe, ethical and innovative uses of AI”. AI is also one of the Grand Challenge areas intended to “put the UK at the forefront of the industries of the future”, with a current focus on using AI techniques to transform the prevention, diagnosis and treatment of chronic diseases by 2030 - this illustrates how governments are trying to encourage the deployment of AI to tackle big social challenges. The European Commission recently published a White Paper on AI that seeks to enable scientific and economic AI breakthroughs while maintaining trust and respecting rights.
These ambitious policy agendas face formidable informational challenges: how can policymakers choose which rapidly evolving AI technologies and applications to support, and how can they intervene in their development without distorting resource allocation or being captured by vested interests? This question requires an urgent answer: according to Collingridge’s dilemma, in a technology’s early stages there is too little information to intervene well, and by the time enough information is available, the technology is generally too entrenched to change. In the face of these difficulties, some think-tanks have called for an approach to policy that allows permissionless innovation by the private sector, or even for outsourcing policy activities such as AI regulation to private sector companies with better information and incentives.
A no-regret activity in this context would be to improve the amount and quality of the evidence available about the state and evolution of AI, both to make sure that whatever policies are put in place are well informed, and to closely monitor those AI markets where policymakers decide to adopt a more laissez-faire approach. By reducing uncertainty, this evidence can also help the private sector make better decisions. In the rest of this blog, I highlight some areas of AI that would benefit from more evidence and indicators, and summarise Nesta’s programme of work to build better intelligence about AI.
Simple measures of AI activity, such as counts of AI research papers and patents, aggregate statistics about talent supply, and levels of investment in AI startups, are useful but hide important information required to inform a policy agenda for inclusive, societally beneficial AI. For example, they tell us little about the thematic composition of the field, who participates in AI development and where, or how AI is being adopted across sectors.
Creating these indicators requires new data sources that capture AI R&D activity wherever it happens, from open research and open source repositories that document advances in AI R&D close to real-time, to online job ads that tell us how businesses in different sectors are building up their AI capabilities.
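As an illustration of the first kind of source, here is a minimal sketch that pulls recent machine-learning preprints from the public arXiv API (the endpoint and the cs.LG category are real; the query size and the other parameters are arbitrary choices for illustration, not our actual harvesting pipeline):

```python
# Minimal sketch: harvesting open AI research metadata from the public
# arXiv API (http://export.arxiv.org/api/query), which returns Atom XML.
# Requires the third-party `feedparser` package (pip install feedparser).
import urllib.parse

import feedparser

ARXIV_API = "http://export.arxiv.org/api/query"

def fetch_arxiv_abstracts(category: str = "cs.LG", max_results: int = 50):
    """Return (title, abstract, date) tuples for recent papers in a category."""
    query = urllib.parse.urlencode({
        "search_query": f"cat:{category}",
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    feed = feedparser.parse(f"{ARXIV_API}?{query}")
    return [(e.title, e.summary, e.published) for e in feed.entries]

if __name__ == "__main__":
    for title, _, published in fetch_arxiv_abstracts()[:5]:
        print(published[:10], title.replace("\n", " "))
```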
Data science methods are also needed to collect, enrich and process this data. For example, natural language processing (NLP) methods can help to identify activities related to AI in research papers, patent and grant abstracts, and topic modelling can generate measures of the thematic composition of the field, distinguishing between various techniques and application domains. We can use network science to measure the novelty of research and innovation activities based on the concepts that they combine. In addition to generating more relevant and inclusive indicators of AI R&D, this rich data can be published in interactive formats such as data visualisations and dashboards where users can explore their own questions.
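To make the topic modelling step concrete, the toy sketch below fits a model over a handful of invented abstracts with scikit-learn’s LDA implementation; the corpus, the number of topics and every other parameter are placeholders for demonstration, not the pipeline behind our indicators:

```python
# Illustrative sketch: estimating the thematic composition of a corpus of
# abstracts with Latent Dirichlet Allocation (scikit-learn). The toy corpus
# and topic count are placeholders, not the analysis described in the post.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "We train a deep convolutional network for medical image diagnosis.",
    "A reinforcement learning agent masters board games through self-play.",
    "Transformer language models improve machine translation quality.",
    "Graph neural networks predict properties of candidate battery materials.",
]

# Bag-of-words representation, dropping very common English words.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)

# Fit a small LDA model; each document gets a distribution over topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)

# Inspect the top words that characterise each topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```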
These are precisely the methods that Nesta’s innovation mapping team specialises in.
In the last two years, we have published several papers, reports and tools that develop indicators and maps along these lines.
In order to build trust around the experimental data sources and indicators we use, we have made our data and code available in several projects. For example, all the code and data for the Deep Learning Deep Change paper are available here, as well as the code for arXlive. Our data has already been used in Stanford HAI’s AI Index, and in a map of global AI research developed by the World Economic Forum.
We are currently working on new analyses of AI in the UK Creative Industries and of research at the intersection of AI and Collective Intelligence, in collaboration with Nesta's Centre for Collective Intelligence Design. We are also further analysing the arXiv data with the goal of measuring private sector participation in AI research and its link with the evolution of thematic diversity in AI research, studying the regional concentration of AI research globally, and monitoring global trends in surveillance-enabling AI technologies such as facial recognition and pedestrian identification. We will be publishing our results in the coming months, together with the code and data required to review and reproduce this work.
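As an aside on how the thematic diversity mentioned above might be quantified, one common option, given here as an illustrative assumption rather than the measure used in our analysis, is the Shannon entropy of a field’s distribution over topics:

```python
# Hypothetical sketch: thematic diversity as the Shannon entropy of the
# share of papers in each topic. A flat distribution (many topics equally
# represented) scores high; concentration in a few topics scores low.
import math

def thematic_diversity(topic_shares):
    """Shannon entropy (in bits) of a distribution over topics."""
    return -sum(p * math.log2(p) for p in topic_shares if p > 0)

print(thematic_diversity([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: maximally diverse
print(thematic_diversity([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits: concentrated
```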
Ultimately, we believe that AI measurement efforts would benefit from more coordination, mission-orientation, standardisation and automation.
Coordination would help researchers respond to policy priorities faster than is possible through the decentralised, bottom-up processes that often guide the behaviour of scholarly communities. Mission-orientation, that is, a clear focus on the evidence needs of policymakers and practitioners, would encourage the kind of interdisciplinary collaborations required to shed light on AI dynamics at the intersection of science, technology, economics and society, while also advancing theory. Standardisation, for example around how AI is defined and how those definitions are operationalised, would make it possible to compare and triangulate different studies more easily than is possible today. Automation, for example through the development of an open source infrastructure to collect, enrich and analyse data about AI, would improve efficiency, enhance reproducibility and make it possible to quickly combine data and methods to triangulate results and explore new questions (we called for something similar in the field of the Science of Science last year). This infrastructure could also be used to automate the creation of open intelligence about AI, making it possible to monitor the evolution of the field closer to real time.
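As a rough, purely illustrative sketch of the collect-enrich-analyse loop such an infrastructure would automate (every function body below is a placeholder standing in for a real component, such as the arXiv harvester sketched earlier or a trained AI-paper classifier):

```python
# Toy sketch of an automated monitoring pipeline: collect raw records,
# enrich them, and publish an indicator. Each step is a placeholder for
# the open source infrastructure the post calls for.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str
    is_ai: bool = False

def collect() -> list[Paper]:
    # Placeholder: in practice, pull from arXiv, patents, job ads, etc.
    return [Paper("Self-play RL", "An agent learns chess by self-play.")]

def enrich(papers: list[Paper]) -> list[Paper]:
    # Placeholder enrichment: a naive keyword flag instead of a classifier.
    keywords = ("learning", "neural", "agent")
    for p in papers:
        p.is_ai = any(k in p.abstract.lower() for k in keywords)
    return papers

def analyse(papers: list[Paper]) -> dict:
    # Placeholder indicator: share of collected records flagged as AI.
    flagged = sum(p.is_ai for p in papers)
    return {"n_records": len(papers), "ai_share": flagged / len(papers)}

if __name__ == "__main__":
    print(analyse(enrich(collect())))  # run end-to-end; schedule e.g. daily
```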
Two interesting models for this effort are the Research on Research Institute (RoRI), an international consortium of research funders, academics and technologists working to champion transformative and translational research on research, and the Creative Industries Policy and Evidence Centre, a Nesta-led institute bringing together academics from different disciplines to improve the evidence base for creative industries policy in the UK. Perhaps we need similar initiatives focused on AI.
We believe that only concerted, interdisciplinary measurement and analysis efforts along these lines will give us a shared understanding of the state of AI and its evolution, and ensure the effectiveness of policies that steer AI in a direction where its value is realised for the benefit of more people.
Get in touch here if you want to find out more or collaborate with us to advance this agenda.