Economists have been studying the relationship between technological change, productivity and employment since Adam Smith’s pin factory at the beginning of the discipline. It should therefore not come as a surprise that AI systems able to behave appropriately in a growing number of situations - from driving cars to detecting tumours in medical scans - have caught their attention.
In September 2017, a group of distinguished economists gathered in Toronto to set out a research agenda for the Economics of Artificial Intelligence (AI). They covered questions such as what makes AI economically unique, what its impacts will be, and what policies would enhance its benefits.
I recently had the privilege of attending the third edition of this conference in Toronto, and of witnessing first-hand how this agenda has evolved in the last two years. In this blog I outline key themes of the conference and relevant papers at four levels: macro, meso (industrial structure), micro and meta (the impacts of AI on the data and methods that economists use to study AI). I then outline some gaps in today's Economics of AI agenda that I believe should be addressed in the future, and conclude.
Ajay Agrawal, Joshua Gans and Avi Goldfarb, the convenors of the conference (together with Catherine Tucker), have in previous work described AI systems as ‘prediction machines’ that make predictions cheap and abundant, enabling organisations to make more and better decisions, and even automating some of them. One example of this is Amazon’s recommendation engine, which presents a personalised version of its website to each visitor. That kind of customisation would not be possible without a machine learning system (a type of AI) that automatically predicts what products might be of interest to individual customers based on historical data.
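To make the ‘prediction machine’ idea concrete, here is a minimal sketch of a prediction-based recommender in that spirit. The data and method are entirely illustrative - a toy co-purchase model, not Amazon's actual system:

```python
import numpy as np

# Toy purchase-history matrix: rows are customers, columns are products;
# 1 means the customer bought the product. All data are made up.
history = np.array([
    [1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 1, 1],
], dtype=float)

def recommend(customer: int, top_n: int = 2) -> list[int]:
    """Predict which unseen products a customer is most likely to want,
    weighting other customers' purchases by co-purchase similarity."""
    similarity = history @ history[customer]   # overlap with each other customer
    similarity[customer] = 0.0                 # exclude the customer themselves
    scores = similarity @ history              # predicted affinity per product
    scores[history[customer] == 1] = -np.inf   # don't recommend owned products
    return [int(i) for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(customer=0))  # product 1 ranks first: similar customers bought it
```

The prediction is cheap to compute and improves automatically as more purchase history accumulates, which is what makes this kind of personalised decision-making economical at scale.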
AI systems can in principle be adopted by any sector facing a prediction problem - which is almost anywhere in the economy from agriculture to finance. This widespread relevance has led some economists to herald AI as the latest example of a transformational ‘General Purpose Technology’ that will reshape the economy like the steam engine or the semiconductor did earlier in history.
AI automates and augments decisions in the economy, and this increases productivity. What are the implications for labour and investment?
The dominant framework for analysing the impact of AI on labour is the task-based model developed by Daron Acemoglu and Pascual Restrepo (building on previous work by Joseph Zeira). This model conceives of the economy as a big collection of productive tasks. The arrival of AI changes the value and importance of these tasks, affecting labour demand and other important macroeconomic variables such as the share of income that goes to labour, and inequality (for example, if AI de-skills labour or increases the share of income going to capital - which tends to be concentrated in fewer hands - this is likely to increase income and wealth inequality).
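In stylised form (my notation, simplifying the published model), the setup looks something like this:

```latex
% Output combines a unit continuum of tasks; tasks up to the automation
% threshold I are performed by capital (machines), the rest by labour.
\[
  Y = \left( \int_{N-1}^{N} y(i)^{\frac{\sigma-1}{\sigma}}\, di \right)^{\frac{\sigma}{\sigma-1}},
  \qquad
  y(i) =
  \begin{cases}
    A_K\, k(i) & \text{if } i \le I \ \text{(automated)}\\
    A_L\, l(i) & \text{if } i > I \ \text{(performed by labour)}
  \end{cases}
\]
% AI pushes the threshold I upwards (displacing labour from tasks), but it can
% also raise productivity and create new tasks at the top of the range, so the
% net effect on labour demand is ambiguous.
```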
The impact of AI on tasks takes place through four channels:

- A displacement effect, as machines take over tasks previously performed by labour.
- A productivity effect, as cheaper production in automated tasks increases demand for labour in the tasks that remain.
- Capital accumulation, as AI adoption raises the demand for (and returns to) capital.
- A reinstatement effect, as the technology creates new tasks in which labour has a comparative advantage.
Considered together, these four channels determine the impact of AI on labour demand. Contrary to the idea of an impending job apocalypse, the model identifies several channels through which AI systems that increase labour productivity could also increase demand for labour. At the same time, and contrary to economists' previous assumption that new technology always increases labour demand through augmentation, the task-based model recognises that the net effect of new technology on labour demand could be negative. This could, for example, happen if firms adopt ‘mediocre’ AI systems that are productive enough to displace workers, but not productive enough to increase labour demand through the other channels.
Several papers presented in the conference built on these themes:
In order to increase productivity, investments in AI need to be accompanied by complementary investments in IT infrastructure, skills and business processes. Some of these investments involve the accumulation of ‘intangibles’ such as data, information and knowledge. In contrast to tangible assets like machines or buildings, intangibles are hard to protect, imitate and value, and their creation often involves lengthy and uncertain processes of experimentation and learning-by-doing (much more on this subject here).
Continuing with the example of Amazon: over its history, the company has built a tangible data and IT infrastructure complementing its AI systems. At the same time, it has developed processes, practices and a mindset of ‘customer-centrism’ and ‘open interfaces’ between its information systems and those of its vendors and the users of its cloud computing services, which could be equally important for its success but are very hard to imitate.
According to a 2018 paper by Erik Brynjolfsson and colleagues, the need to accumulate these intangibles across the economy explains why advances in AI are taking so long to materialise in productivity growth or drastic changes in labour demand.
Several papers presented in Toronto this year explored these questions empirically:
Think of a sector like health: the nature of the tasks undertaken in this industry, the availability of data, the scope for changes in business processes and its industrial structure (including levels of competition and entrepreneurship) are completely different from, say, finance or advertising. This means that its rate of AI adoption and its impact will be very different from what happens in those industries. Previous editions of the Economics of AI conference included papers about the impact of AI in sectors such as media or healthcare.
This year's conference considered sector-specific issues in several areas, including R&D and regulation.
In the inaugural Economics of AI conference, Cockburn, Henderson and Stern proposed that AI is not just a General Purpose Technology, but also an ‘invention in the method of invention’ that could greatly improve the productivity of scientific R&D, generating important spillovers for the sectors using that knowledge. One could even argue that the idea of the Singularity is an extreme case of this model where “AI systems that create better ideas” become better at creating “AI systems that create better ideas” in a recursive loop that could lead to exponential growth.
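One rough way to formalise that loop is the standard knowledge-production function from growth theory (this is my gloss, not a result presented at the conference):

```latex
% Knowledge production when existing ideas feed back into idea creation:
\[
  \dot{A} = \delta A^{\phi}
\]
% phi < 1: ideas get harder to find and growth slows;
% phi = 1: steady exponential growth;
% phi > 1: each idea makes the next one easier and the model explodes
% in finite time -- the 'Singularity' intuition in its starkest form.
```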
This year, venture capitalist Steve Jurvetson and Abraham Heifets, CEO of Atomwise, a startup that uses AI in drug discovery, spoke about how they are already pursuing some of these opportunities in their ventures, and two papers investigated the impact of AI on R&D:
Regulation sets the context and the rules of the game that shape the rate and direction of new technologies such as AI. At the same time, regulation is itself an industry whose structure and processes are being transformed by AI systems that increase the pace and breadth of change, and create new opportunities to monitor economic activity. Two talks at the conference focused on this two-way street between regulation and AI.
Modern AI systems based on machine learning algorithms that detect patterns in data are often referred to as black boxes because the rationale for their predictions is difficult to explain and understand. Similarly, the firms adopting AI systems look like black boxes to economists taking a macro view: AI intangibles are, after all, a broad category of business investments that includes experiments with various processes, practices, and new business and organisational models. But what are these firms actually doing when they adopt an AI system, and what are the impacts?
Several papers presented at the conference illustrated how economists are starting to open these organisational black boxes to measure the impact of AI and compare it with the status quo. As they do this, they are also incorporating into the Economics of AI some of the complex factors that come into play when firms deploy AI systems: systems that do not simply increase the supply of predictions, but also reshape the information environment in which other actors (employees, consumers, competitors, the AI systems themselves) make decisions, leading to strategic behaviours and unintended consequences that are generally abstracted away in the macro perspective.
AI techniques have much to contribute to economic studies, which often seek to detect (causal) patterns in data. Susan Athey surveyed these opportunities in the inaugural Economics of AI conference, with a particular focus on how machine learning can be used to enhance existing econometric methods.
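To give a flavour of what this looks like in practice, here is a minimal sketch of one such method - double/debiased machine learning in the spirit of Chernozhukov et al. - using scikit-learn on simulated data. This is my illustration, not code from any of the papers:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Simulated data: X confounds both the treatment d and the outcome y.
# The true treatment effect is 2.0.
n = 2000
X = rng.normal(size=(n, 5))
d = X[:, 0] + rng.normal(size=n)                     # treatment depends on X
y = 2.0 * d + np.sin(X[:, 0]) + rng.normal(size=n)   # non-linear confounding

# Residualise y and d on X with a flexible learner (cross-fitted to avoid
# overfitting bias), then regress residual on residual to recover the effect.
y_res = y - cross_val_predict(RandomForestRegressor(), X, y, cv=5)
d_res = d - cross_val_predict(RandomForestRegressor(), X, d, cv=5)
effect = (d_res @ y_res) / (d_res @ d_res)
print(f"estimated treatment effect: {effect:.2f}")   # close to 2.0
```

A naive regression of y on d would be biased by the confounder; the machine learning step absorbs the flexible, non-linear part of that confounding while leaving the causal parameter to conventional estimation.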
Several papers presented in this year’s conference explored new data sources and methods along these lines, for example using big datasets from LinkedIn and Uber to measure, respectively, technology adoption and service quality in car rides, and online experiments to test how UberX drivers react to informational nudges. At Nesta, we are analysing open datasets with machine learning methods to map AI research.
Although these methods open up new analytical opportunities, they also raise challenges. One is reproducibility, particularly when the research relies on proprietary datasets that cannot be shared with other researchers (with the added risk of publication bias if data owners are able to control which findings are released). Another is ethics, for example around consent for participation in online experiments. In our own work we seek to address these issues by being very transparent with our analyses (for example, the data and code that we used in our AI mapping analysis are available on GitHub) and by developing ethical guidelines to inform our data science work.
Having summarised key themes and papers from the conference, I now turn to some questions that I felt were missing.
Macro studies of the impact of AI assume that AI will increase productivity as long as businesses undertake the necessary complementary investments. They pay little attention to new issues created by AI such as algorithmic manipulation, bias and error, worker non-compliance with AI recommendations, or information asymmetries in AI markets, some of which are already being considered in micro studies from the trenches of AI adoption. These factors could reduce AI’s impact on productivity (making it mediocre, and therefore predominantly labour-displacing), increase the need to invest in new complements such as AI supervision and moderation, hinder trade in markets for AI 'lemons', and have important distributional implications, for example through algorithmic discrimination against vulnerable groups. Macro research on AI should start to consider these complex aspects of AI adoption and impact explicitly, rather than hiding them in the black box of AI-complementing intangible investments and/or assuming that they are somehow exogenous to AI deployment. As an example, in previous work I started to sketch what such a model could look like if we take into account the risk of algorithmic error in different industries, and the investments in human supervision required to manage it.
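To illustrate the kind of accounting such a model implies, here is a back-of-the-envelope sketch with toy numbers of my own (all figures are made up for illustration):

```python
# Toy model: net productivity gain from adopting an AI system once
# algorithmic error and human supervision costs are taken into account.

def net_productivity_gain(gross_gain: float, error_rate: float,
                          cost_per_error: float, supervision_cost: float) -> float:
    """Gross gain from automation, minus expected error losses, minus the
    human supervision needed to keep errors at this rate."""
    return gross_gain - error_rate * cost_per_error - supervision_cost

# High-stakes sector (think health): errors are costly, supervision is heavy.
print(net_productivity_gain(10.0, 0.05, 100.0, 8.0))  # -3.0: 'mediocre' AI

# Low-stakes sector (think advertising): the same system pays off.
print(net_productivity_gain(10.0, 0.05, 5.0, 1.0))    # 8.75
```

The same algorithm can be productivity-enhancing in one industry and mediocre (or worse) in another, purely because of differences in error costs and supervision requirements.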
In general, the research presented at the Economics of AI conference modelled AI as an exogenous shock to the economy, in some cases explicitly, as in Daniel Rock’s study of the impact of TensorFlow's release on firms' value. Yet AI progress is itself an economic process whose analysis should be part of the Economics of AI agenda.
In his conference dinner speech, Jack Clark from OpenAI described key trends in AI R&D: we are witnessing an ‘industrialisation of AI’, as corporate labs, big datasets and large-scale IT infrastructures become more important in AI research, and at the same time a ‘democratisation of AI’, as open source software, open data and cloud computing services make it easier to recombine and deploy state-of-the-art AI systems. These changes have important implications for AI adoption and impacts. For example, the fact that researchers in academia increasingly need to collaborate with the private sector in order to access the computational infrastructure required to train state-of-the-art AI systems could diminish the spillovers from this research. Meanwhile, the diffusion of AI research through open channels creates significant challenges for regulators who need to monitor compliance in a context where adopting 'dual use' AI technologies is as simple as downloading and installing some software from a coding repository like GitHub. Few if any of the papers presented at the conference addressed these topics.
Future work could fill these gaps by developing formal models of AI progress built around an AI production function that takes inputs such as data, software, computational infrastructure and skilled labour and produces AI systems with a given level of performance. In this paper, Miles Brundage started outlining qualitatively what that model could look like. The model could be operationalised using data from open and web sources, and from initiatives to measure AI progress such as the EFF's and the Papers with Code project's, in order to study the structure, composition and productivity of the AI industry, and its supply of AI technologies and knowledge to other sectors. Recent work by Felten, Raj and Seamans, where they use the EFF indicators to link advances in AI technologies with jobs at risk of automation, illustrates how this kind of analysis could help forecast the economic impacts of AI progress and inform policy.
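As a starting point, one could posit a simple Cobb-Douglas form for such a production function. The functional form and exponents below are illustrative assumptions of mine, not estimates:

```python
# Hypothetical 'AI production function': system performance as a function
# of data, compute and skilled labour. All parameters are assumptions.

def ai_performance(data: float, compute: float, talent: float,
                   tfp: float = 1.0, alpha: float = 0.3,
                   beta: float = 0.4, gamma: float = 0.3) -> float:
    return tfp * data**alpha * compute**beta * talent**gamma

# Diminishing returns to any single input: doubling compute alone raises
# performance by 2**0.4, roughly 1.32x, under these assumptions.
print(ai_performance(1, 2, 1) / ai_performance(1, 1, 1))
```

Estimating the shape and parameters of such a function from data on real AI systems would tell us, for example, how far progress currently depends on scaling compute versus recruiting scarce talent.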
Perhaps unsurprisingly given the point above, most of the research presented at the conference adopted a 'monolithic' definition of AI that equates it with the deep learning techniques currently dominating the field. This neglects concerns about the lack of robustness, explainability, data efficiency and environmental sustainability of deep learning algorithms, and the fact that alternative AI research and technological trajectories could be feasible and perhaps desirable. Moreover, as Daron Acemoglu showed some time ago, the market will undersupply alternatives to a dominant technology if researchers are not able to capture the benefits of sustaining technological diversity. Acemoglu pointed out that maintaining diversity in researchers’ capabilities and beliefs, and providing public funding for less commercially oriented alternatives, are two potential strategies to bring levels of technological diversity closer to the social optimum.
Could lack of technological diversity become a problem in the AI field? The lack of diversity in the AI research workforce, and the increasing influence of the private sector on AI research agendas through the aforementioned industrialisation of AI research, give reasons for concern, but the evidence base is lacking. More research is needed to measure AI's technological diversity and how it is shaped by the goals, preferences and agendas of the scientists, engineers and organisations involved in it. This is an active area of research at Nesta, where we will be publishing some findings soon.
In the inaugural Economics of AI conference, Trajtenberg, and Korinek and Stiglitz, asked who will benefit and who will suffer when AI arrives, whether AI deployment could become politically unacceptable, and what policies should be put in place to minimise the societal costs of AI when it is deployed. More recently, Daron Acemoglu and Pascual Restrepo have expressed concerns that the AI industry might be building ‘the wrong kind of AI’, because it does not internalise negative externalities from AI deployment (eg labour market disruption) and because some of its leaders are biased in favour of mass automation regardless of its risks. These important questions were largely absent from the debate in Toronto, yet economists need to formalise and operationalise models of the distributional impacts of AI and its externalities in order to inform policies that ensure its economic benefits are widely shared and reduce the risk of a public backlash against AI.
For me, the biggest takeaway from this year's Economics of AI conference was that the impacts of AI will be more complex and take longer to appear than some newspaper headlines might lead us to expect: jobs will evolve and adapt in response to AI systems rather than disappearing completely. Firms will carry out organisational experiments to discover which applications of AI create most value. In some cases the experiments will fail, or show that adopting AI in certain circumstances is uneconomical. Some firms will learn from these failures, and others will try again. Skills shortages, regulation and consumer perceptions will slow down the adoption of certain AI systems (in some cases for good, in some cases for ill). The adoption of AI in an industry or firm will create sub-industries intent on manipulating it, leading to unexpected outcomes and to new changes in response.
In other words, the future of AI in the economy will resemble the Internet more than Skynet, the AI system that decides to destroy humanity (and its economy) in The Terminator: it will be complicated. Prediction machines not only increase the number of decisions we are able to make based on AI recommendations, but also the number of decisions that we need to make, as participants in the economy and as a society, about which AI technologies to develop, where and how to adopt them, and how to manage their impacts. It is a very good thing that economists are doing research to inform these decisions and discussing their findings close to real time in fora such as the Economics of AI conference. I look forward to seeing how this research agenda evolves in the future (with Nesta's contributions too), and how its findings inform AI policies that ensure this powerful technology is deployed for the benefit of more people.