Nesta is an innovation foundation. For us, innovation means turning bold ideas into reality and changing lives for the better. We use our expertise, skills and funding in areas where there are big challenges facing society.
In this Nesta talks to… Jason Matheny joined Nesta's Laurie Smith to discuss the role of emerging technologies, the potential consequences of using them, and the ethics behind doing so.
As CEO of the RAND Corporation, Jason has a longstanding interest in existential risks and emerging technologies, and he has become increasingly concerned about the risks they pose. One specific concern is the lack of governance in synthetic biology: it is now worryingly easy to synthesise dangerous viruses, and with policymakers focused on COVID-19 during the pandemic, that governance gap has gone largely unaddressed. Jason therefore emphasised the need for policymakers to keep up with technological advances and proposed building institutional capacity within government to understand and address them. Bringing attention to future risks can be challenging, given the short-term focus of policymakers, so specialised policy teams that identify and address long-term risks, alongside proactive planning, would help to mitigate the potential risks associated with emerging technologies.
Regarding generative AI models like ChatGPT, Jason doesn't see them as posing an existential risk currently but acknowledges that these tools could facilitate the creation of cyber and bioweapons in the future. To mitigate risks, Jason advocates for good governance, adaptable policies and the development of expertise at the intersection of technology and policy.
Laurie and Jason also touched on the risks to democratic governance posed by social media platforms and information bubbles. Jason highlighted the importance of addressing political polarisation and promoting a sense of shared responsibility within democracies as a way to overcome this. They further explored the impact of generative AI on democracy and truth, agreeing that while AI models can provide positive effects for society, such as facilitating self-education, there are concerns about the potential for misinformation and manipulation.
As we live in a time of rapid technological change, the risks to privacy, security, democracy and truth are ever growing. Practices like foresight and superforecasting can therefore help prevent us from sleepwalking into technological and social catastrophe.
Laurie Smith: Hello, and welcome to our latest "Nesta talks to," our conversation event series with today's most exciting thinkers on the big topics related to our missions and innovation methods. My name is Laurie Smith, and I lead much of the research in the discovery hub at Nesta, the UK's innovation agency for social good.
We design, test, and scale solutions to society's biggest problems. Our three missions are to help people live healthy lives, create a sustainable future where the economy works for both people and the planet, and give every child a fairer start.
The discovery hub is responsible for helping bring the outside into the organisation by considering the consequences of external trends for Nesta's work, which provides the link to the subjects of today's discussion-- emerging technologies, democracy, truth, and the future. And here to talk to us about this topic is Jason Matheny.
Jason Matheny: Hey, Laurie.
Laurie Smith: Hello.
Jason Matheny: Good to see you.
Laurie Smith: Jason is president and chief executive officer of the RAND Corporation, a nonprofit, nonpartisan research organisation that helps improve policy and decision-making through research and analysis. Prior to becoming RAND's president and CEO in July 2022, he led White House policy on technology and national security at the National Security Council and the Office of Science and Technology Policy.
Jason has served on many nonpartisan boards and committees, including the National Security Commission on Artificial Intelligence, to which he was appointed by Congress in 2018. Welcome, Jason.
So before we start, I want to invite our audience to join the conversation in the comments box on the right-hand side of your screen and ask any questions throughout the event. Closed captions can be accessed via the LinkedIn live stream.
So let's start with existential risk and emerging technologies, which have received considerable attention of late following advances in AI and the pandemic and I think have been a theme of your career, Jason. What got you involved in these topics?
Jason Matheny: Yeah. First, thanks, Laurie. It's great to talk. I'm really looking forward to this discussion. I think what got me started in existential risk was in 2002, I was working in India on traditional infectious disease control-- malaria, HIV, tuberculosis. My background was in epidemiology, and I had expected to work in international health for the rest of my career.
But in 2002, the first virus was synthesised from scratch, and it was sort of an "oh, crap" moment for the public health community because we realised at that point, we didn't just have to deal with natural infectious diseases. We might also have to deal with artificial infectious diseases, either by accident or by some intentional outbreak.
And I then moved from working on traditional infectious diseases to working in national security, biosecurity. And I think over the next few years, really became increasingly concerned about the trajectory of emerging technologies and the risks that they might pose. Enormous upside potential, of course, from technologies like synthetic biology and AI, but also with a need to prevent and mitigate risks that were going to come alongside those benefits.
Laurie Smith: And you mentioned the first synthetic virus and synthetic biology. What's your take on where we are with that now? That was some years ago. Where do you think we are with that issue?
Jason Matheny: I think we're in a pretty risky period right now. There's actually very little governance of synthetic biology. If you want to synthesise the smallpox virus, there's very little to stop you. You can buy a DNA synthesiser. The information that's needed, the blueprint for the virus is available online. The recipe for how you synthesise viruses is also available. You do need technical training, but it's technical training that is provided in universities.
And in some parts of the world, you can probably even find a DNA synthesis provider to do much of the work for you. In fact, all compliance with synthesis screening right now is voluntary, and it's frequently incomplete. So I think we're in a risky window right now.
Laurie Smith: And do you think things have improved at all since the pandemic, given there are some sort of concern that it might have come from a lab, or even if it didn't, there's still that sort of risk? Has anything changed?
I know Toby Ord, who's another sort of existential risk expert, was saying that the Biological Weapons Convention [INAUDIBLE] has an annual budget similar to the average McDonald's restaurant, which seems somewhat worrying given the scale of the challenge.
Jason Matheny: Yeah. Unfortunately, I don't think things have improved. The policy community I think has felt pandemic fatigue from budgets focused on addressing COVID. And that made it actually harder to get policymaker attention on preventing future biological events, whether natural, accidental or intentional.
I think Toby is still accurate on that point about the tiny budget for the Biological Weapons Convention. I think it's one of the more striking statistics in his excellent book, "The Precipice."
Laurie Smith: And you mentioned how easy it is to get the necessary tools to do synthetic biology. And this raises an idea which I'm sure you've heard of-- garage biology, which is a little bit like the growth of hobbyist computing in the late 1970s and early '80s. Do you see patterns going in that direction at all?
Jason Matheny: Yeah, it's hard to tell what it'll look like 10 years from now-- whether it might be less of a hobbyist discipline, with more consolidation in DNA synthesis activity. That would have upsides and downsides. There's a lot of innovation that's happening at the individual lab level or the hobbyist level. At the same time, that means that there's a lot of things that could go wrong, either accidentally or intentionally.
I do think it's really important for there to be some form of DNA synthesis screening and some level of governance of where DNA synthesis machines are going.
Sometimes I think about the comparison between biology or biological weapons and nuclear weapons. One way in which biological weapons are a lot more dangerous than nuclear weapons is that they self-replicate. So if you have a relatively small arsenal of biological weapons, it's pretty trivial to scale it up, to make it much larger.
Effectively because of the self-replication of biological weapons, the world becomes the weapon. The number of people who end up serving as hosts of the weapon itself and then vectors for that weapon ends up being the reason that these weapons are so dangerous.
And we don't have right now especially effective defences that scale as well as the weapons themselves. So I think we're quite vulnerable.
Laurie Smith: Really interesting. And so moving on to risks others have been talking about. I'm sure you're aware of ChatGPT and generative AI models. What do you think are the existential risks, if any, that might be posed by those tools?
Jason Matheny: Yeah. I doubt that current generative AI tools like ChatGPT or Claude or others that are available today pose an existential risk. I do think some of those tools could make it easier to build cyber weapons, possibly easier to build bioweapons, but I think that most of the risk is in the future.
That if we just think about the overall trajectory of these systems, it's likely that they'll increase in capability. That means that the benefits from these tools could substantially increase. It also means that the risk from those tools could substantially increase. So again, I think this is an area where we'll need good governance.
And it's hard often for policy to keep up with technological change. Technological change tends to happen much faster than policy change. So I think this is an area where we'll just need to be especially thoughtful about how to future-proof certain kinds of policies to make sure that they can adapt as the technologies change.
Laurie Smith: I suppose that leads to the idea of you talking about the need of policy to keep up with technology and technologies often moving faster. There's an idea around some anticipatory regulation or anticipatory governance. Do you see that as a way of managing that sort of problem? Or do you think we'd have to go down a different sort of route?
Jason Matheny: Yeah. I think with-- there's the line attributed to multiple authors that prediction is hard, especially about the future. But I think it is true that making technology forecasts is especially difficult. And there's been empirical work by the [? Tory ?] group and others looking at the history of technology forecasting. We tend not to be very accurate after more than a 5-year time horizon.
So what I think you want in cases like this, like in the case of AI or synthetic biology, is to create a platform for policy that can adapt as the technology itself evolves. So one example is building up institutional capacity within government for people who understand the technology.
In the US government, this has been a challenge. Most of our policy experts have tended to come from a legal background. Very smart, very thoughtful, but often don't have a strong grounding in the technologies. Our technologists in the US very frequently don't want to work on policy.
So I think one robustly good thing that can be future-proof is building institutional capacity of people who work at the intersection of technology and policy, and creating a career track for people who want to pursue that. A second is ensuring that our policy schools are doing more to generate that pipeline and support it. Third is figuring out mechanisms by which we can bring in talent from technology into policy roles.
We have something in the US called the Intergovernmental Personnel Act which allows a freer exchange of talent across the private sector and public sector. I think that's really important. And then have a larger pool of advisors and experts who can consult with appropriate guardrails against conflicts of interest.
I think ultimately, good governance comes down to having really thoughtful people who are thinking about these hard policy problems. So personnel is the thing that I would prioritise.
Laurie Smith: Really helpful. And we touched on synthetic biology, and we touched on the risks from AI, but what's the existential or at least significant risk that we should be talking about but we're not? I suppose in the same way as with the pandemic-- people were aware of that for some time, but it wasn't in the national consciousness. Is there something we should be thinking about now that no one's chatting about?
Jason Matheny: Yeah. It's a great question. To think of another historical example, I think nobody was thinking about nuclear weapons as an existential threat until the 1930s, maybe with the exception of HG Wells.
And so from the mid part of the 1930s in a very private way until the middle of the 1940s, there were a small number of people who thought, OK, this actually could be an existential threat, and then it became a more public recognition of that as an existential threat.
So it is likely that there are either future technologies that are not yet visible to us that could pose existential threats, or other categories of risk, possibly non-technological. For one example, I do think that there are kinds of risks to democratic governance that could pose existential threats to the way that democracies function.
We're seeing increased political polarisation in the US and in the UK and in several other democratic states, increased political paralysis, cultural polarisation. And I think some of the drivers of that are technological things, like certain kinds of social media platforms and information bubbles, personalised search.
But part of it also is just a lack of shared feeling that we're in this together. And those kinds of cultural trends that lead to existential threats to governance, I don't know enough about what drives that and ultimately how to mitigate it. But it's something I think poses a real risk to shared governance.
Laurie Smith: And how would you get-- you've obviously worked with lots of very senior decision makers, and you're obviously a senior decision maker yourself, but given those risks that aren't necessarily yet on people's radar but are important, how do you get people to listen? Are there systems or structures or methods people can use to get more attention on those issues, particularly when politicians are battered with the day-to-day of what's happening in the news right at the moment?
Jason Matheny: Yeah. That's a good question. One of my colleagues at RAND, Andy Hoehn, just wrote a book called "The Age of Danger," a part of which discusses this general problem of how do you get policymakers to pay attention to something that might be just over the horizon as opposed to the tyranny of the immediate, the things that are completely dominating policymakers' attention today.
It's quite difficult because there are things that are urgent today that are incredibly important. There are things that might be even more important that are years hence or even decades hence that we need to start preparing for. But freeing up the political attention span for those kinds of risks is deeply challenging.
I think that it does make sense to have certain parts of the policy community, though, whose full-time job is to think about those things. In the United States, we have as part of the intelligence community the National Intelligence Council, within which there is this strategic futures group that thinks about longer-term risks.
Within the Defence Department, we have something called the Office of Net Assessment, which is sort of a think tank within our Pentagon that thinks about longer-term risks. I think these are very small functions, a handful of people. I think we need more people who think about those things, because often the work that we need to do to prevent or mitigate a risk that might be coming 10 years from now is something we actually need to start on today.
There's a lot that we could have done to reduce the risk from synthetic biology years ago when we could sort of see what was going to be coming. We could have baked in security and safety. We could have brought in much more expertise, for example, from the biotechnology community.
Same right now for AI. I think we can see in general where this technology might be headed. We can see that it will be profoundly important to the future, and we need to start planning for that. But there's very few parts of government that right now have the capacity to do that sort of planning.
Laurie Smith: And pivoting from emerging technologies, which we might come back to with questions from the audience, I know that you're interested-- you have interests in democracy and truth. So getting [INAUDIBLE], what impacts do you think generative AI is going to have on democracy and truth, if any?
Jason Matheny: Yeah. So first, there will be some positive effects. When I use some of the large language models today, I use it to tutor myself about something that I'm stupid about. I just want to get smarter on topic x. I'll ask, I'll generate a prompt for one of the models to explain something to me.
And it doesn't always get it perfectly right. There's still these hallucination problems and other errors. But it's a lot better than what I could have done myself either by trying to synthesise a bunch of text or by even doing some real digging on Wikipedia or others. I find it's just a really good way of synthesising content.
I can say, summarise deterrence theory in 10 paragraphs, and it does a pretty good job of that. So I think we have, in the near term, digital tutors that will be accessible to us that can lead to an even better-informed citizenry. And I think that's so fundamental to democratic function.
But we also have with these tools the potential to produce disinformation at scale. So fake text, fake photos, fake audio, fake video. I think that will be a challenge. I don't think that this will be a fatal challenge to democracy. I think that with a few high-profile instances of disinformation, of it being really disruptive, I think it is likely to lead to either regulations or other policy levers that require things like watermarking in synthetic media for assessing provenance.
As an analogy, we still deal with the problem of counterfeit money. And it's a problem, it's a nuisance, but it's not existential. Actually except for North Korea. It's existential for North Korea because North Korea's economy actually depends on counterfeit currencies. But for the rest of the world, it's a nuisance because we have relatively good mechanisms for spotting counterfeits, and those mechanisms were built into the generation of currency.
I think it's likely that we'll have similar mechanisms at some point to assert the authenticity of media. But I think we're going to have some rocky times ahead. I think in this period where it's unclear how the authenticity of media will be asserted, we're likely to see some high-profile examples of disinformation that's using these generative AI systems.
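As a purely illustrative aside (not something described in the talk), the core idea behind asserting the authenticity of media can be sketched in a few lines of code: a publisher signs content at the point of release, and anyone holding the signature can later check that the content hasn't been altered. The key, function names and HMAC approach below are assumptions chosen for the example; real provenance and watermarking schemes are considerably more sophisticated and would typically use public-key signatures so that anyone can verify without a shared secret.

```python
# Illustrative sketch only: asserting media provenance with a cryptographic
# signature (HMAC). Real schemes embed watermarks or signed metadata and use
# public-key cryptography; this just shows the core idea that authenticity
# can be checked against a signature created at the source.

import hmac
import hashlib

PUBLISHER_KEY = b"example-secret-key"  # hypothetical key held by the publisher

def sign_media(media_bytes: bytes, key: bytes = PUBLISHER_KEY) -> str:
    """Produce a signature the publisher attaches to the media it releases."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str, key: bytes = PUBLISHER_KEY) -> bool:
    """Check that the media still matches the signature, i.e. it hasn't been altered."""
    expected = hmac.new(key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

if __name__ == "__main__":
    original = b"Video frame data as released by the newsroom"
    tampered = b"Video frame data altered after release"

    sig = sign_media(original)
    print(verify_media(original, sig))   # True: provenance checks out
    print(verify_media(tampered, sig))   # False: content no longer matches
```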
Laurie Smith: And so panning out from AI specifically to digital technologies in general, they're often blamed as the cause of the breakdown, or the start of the breakdown, of democracy and truth in countries like the United Kingdom and the United States. To what extent do you think they're the actual cause of the problem or simply a symptom or accelerant of some other underlying cause? If there is another underlying cause or causes, what do you think they might be?
Jason Matheny: Yeah, I do think that social media platforms play a big role in this. Personalised search, too. The Daily Me. It's possible to create these information bubbles for ourselves that are completely insulated from one another, and I'm guilty of this too. There's a set of news outlets that I follow and a set of blogs that I follow.
And I think that the downside of that is that we don't have a shared informational context in which citizens are joined with a common sense of what's happening in the world and common standards for evidence.
RAND has for several years had a research programme on what we call truth decay, which is a sort of erosion of norms around evidence used in policy debates. And I think one of the causes of truth decay are these information bubbles.
But I do think that it's an amplifier for other kinds of division within society. I think part of that is the way in which people identify themselves politically has to do with affiliation with certain kinds of tribal commitments like, well, I'm in tribe x, and tribe x believes such and such. Tribe y believes something else, and I'm not part of that tribe.
And I think that when those identities are formed around things that make compromise very difficult, that's when democracy really gets paralysed. And I think we've increasingly seen that. So the amplification of these tribal identities that some of these tech platforms have created I think really does present a challenge to democracy.
It's not the only amplifier for this. I think we've also seen a change in journalistic standards. I think we've seen a decrease in civics education. And I think also geography plays a role. There's a certain way in which even the physical location of people has reduced the level of contact that people have with folks outside of their tribe. I think all of those are likely to contribute to this phenomenon that we're seeing in several democracies.
Laurie Smith: And with the [INAUDIBLE] of technology, keeping with this topic of democracy, the US is often seen as a home of modern democracy but now faces growing competition from an increasingly autocratic China. How do you think this is going to play out, and what does this struggle mean for democracy and truth around the world?
Jason Matheny: Yeah. I think first democracies need to demonstrate that they can deliver for all of their people, not just the wealthiest half or 3/4. When China's diplomats promote China's system of governance in the global south, for example, one of the talking points is, hey, do you really want to be like the Western democracies that have left a large part of their society behind, that have internal struggles, are now politically polarised?
So I think that democracies need to demonstrate that our systems can function for everybody. I do think that democracies are substantially better for the world for the long-term future, both because of the freedom and self-determination that they provide to individuals, but also I think democracies just have better error-correction systems that can prevent large-scale catastrophes for society.
Democracies are imperfect. We make lots of mistakes. We fumble all the time. But our mistakes tend not to be catastrophic at the level of an entire country causing tens of millions of deaths. We tend to have avoided things like the Great Leap Forward where tens of millions of people died in China because of a few autocrats making just terrible decisions or the Stalinist purges killing tens of millions of people because of a few autocrats making terrible decisions.
So I think the distributed decision-making within democracies tends to avoid these extreme catastrophes that we see within autocratic systems. And that will be particularly important to have that kind of error correction in an age when we have things like AI and synthetic biology that are capable of existential catastrophes.
I also do worry a lot about an autocratic system that uses AI for surveillance and social control because I think that that allows a totalitarian system to actually be scalable and cost-effective in a way that could be permanent. It could cause civilisational lock-in, effectively. A system that is incapable of change or self-correction forever.
And I think the prospect of having a civilisation that doesn't change, doesn't correct, doesn't make itself better over time is just a tragic waste of human potential.
Laurie Smith: Moving on to another one of your interests, and also an area where RAND obviously has a long history and reputation-- the future, thinking about the future, and strategic foresight. What do you think are the most exciting developments in this field at the moment?
Jason Matheny: Yeah. One of the things I've worked on for a while is using crowdsourcing for forecasts. And there are a few parts of crowdsourcing that I really like. First, the fact that it's participatory, that it's inherently democratic-- engaging as many people as possible in thinking about what the future might look like and how we address the governance challenges in the future.
And then there's an empirical reason for crowdsourcing, which is that it tends to produce more accurate forecasts. So in a job that I had at IARPA, which was the research arm of the intelligence community in the United States, we ran a large forecasting tournament over four years.
Collected millions of forecasts for geopolitical events, things like would there be a battle in a particular place, would there be a foreign election that would go one way versus another, would there be a weapons test, economic changes, what would be inflation rates. All sorts of questions.
And I think at the height of this experiment, about 30,000 people participated. Some from the government, some people outside of the government, some people who were real professional experts, some people who were hobbyists and just doing this in their free time.
And there were a few really interesting findings. One is if you take the unweighted average of hundreds of judgments about a particular question, that's really hard to beat. The unweighted average tends to be a really robust estimate of some event.
And the other thing was there are ways of increasing the accuracy even more if you give greater weight to people who have historically been more accurate or who tend to have a set of approaches to thinking about future events: they tend to be ceaselessly self-critical, they tend to update their judgments when presented with new information, and they tend not to be very overconfident, like traditional pundits are.
Instead, they tend to be pretty humble and adjust their judgments when they see reason to. And the results of some of that work are summarised in a book by Phil Tetlock called "Superforecasting," but there are other descriptions of this. There are some "Economist" articles that talk about this work.
And there's an effort in the UK called Cosmic Bazaar based on this research. And I'm overall very optimistic about the application of doing more crowdsourced forecasting. I think it's one of the things that we should be using much more often. I try to use it for myself personally.
Like when I have a major decision to make, I poll not just my friends but also some of the people who I just think are really thoughtful, whose judgement I value. I ask them for probabilities: will I think this was a good decision two years from now? And then I take the average. And I've kept score over the last 15 years that I've done this, and the unweighted average has been more accurate than either my own judgement or individual judgments.
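As a rough illustration of the aggregation ideas Jason describes-- the hard-to-beat unweighted average, and the extra accuracy you can get by upweighting forecasters with better track records-- here is a minimal sketch. The inverse-Brier-score weighting and the example numbers are assumptions made for illustration, not the method actually used in the IARPA tournament.

```python
# Minimal sketch of two ways to aggregate crowd probability forecasts:
# (1) the unweighted average, and (2) a weighted average that gives more
# weight to forecasters whose past forecasts were more accurate.

from statistics import mean

def brier_score(past_forecasts, past_outcomes):
    """Mean squared error between probabilities and 0/1 outcomes (lower is better)."""
    return mean((p - o) ** 2 for p, o in zip(past_forecasts, past_outcomes))

def unweighted_average(forecasts):
    """The hard-to-beat baseline: a simple mean of everyone's probability."""
    return mean(forecasts)

def accuracy_weighted_average(forecasts, track_records):
    """Weight each forecaster by the inverse of their historical Brier score."""
    weights = [1.0 / (brier_score(*record) + 1e-6) for record in track_records]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

if __name__ == "__main__":
    # Three hypothetical forecasters give probabilities for the same event.
    forecasts = [0.6, 0.75, 0.3]
    # Each forecaster's past (probabilities, outcomes), used to weight them.
    track_records = [
        ([0.7, 0.2, 0.9], [1, 0, 1]),    # fairly well calibrated
        ([0.8, 0.1, 0.95], [1, 0, 1]),   # well calibrated and sharp
        ([0.9, 0.9, 0.1], [0, 0, 1]),    # overconfident pundit
    ]
    print(f"Unweighted average:        {unweighted_average(forecasts):.2f}")
    print(f"Accuracy-weighted average: {accuracy_weighted_average(forecasts, track_records):.2f}")
```

In this toy example the overconfident forecaster barely moves the weighted estimate, while all three count equally in the unweighted average.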
Laurie Smith: I like the idea that humility is an important skill in forecasting. Now I'll turn things over to the audience. I've got some questions that have come in. One was about-- there's someone called Thomas Carroll who is joining from the UK NHS, which is our National Health Service, and they're interested in hearing about the impact on health systems and policy.
I presume that's probably in reference-- I think that came early, so it might have been in reference to synthetic biology and artificial intelligence. What are your thoughts about the impact on health of those two technologies?
Jason Matheny: Great upside potential. I think an early expression of the impact of AI on biomedicine would be DeepMind's AlphaFold, which I think is one of the more stunning breakthroughs in AI application-- solving these kinds of protein-folding problems for thousands of proteins. And that has a range of important biomedical applications.
We could also see increased application of AI to diagnostics. On synthetic biology, there's the ability to synthesise organisms that can then produce useful things for us, including medicines but also other materials, and to rapidly design and manufacture new vaccines. So for example, mRNA vaccines are a great new tool for infectious disease control.
On the negative side, AI could be used to design pathogens that are much worse than those found in nature or even in existing biological weapons programmes, which unfortunately there are several that still exist in the world in countries that do not practise great lab security and don't have really good forms of governance. So I think that's quite worrisome. And I think AI could amplify some of those weapon programmes.
Synthetic biology, the same. It gives us a toolkit for creating viruses and bacteria that combine traits that could be quite potent-- combining, say, very high transmissibility, high lethality, and a longer incubation period that's pretty asymptomatic, so that you have biological weapons that could be much more severe than natural pathogens.
Laurie Smith: And we've got a question here from Alison Driver, which I think picks up on your research programme on truth decay. So "how can we counteract levels of distrust across communities"?
Jason Matheny: Yeah. I wish I knew. I think we're trying a few things at RAND just to understand what the facts are about communities and distrust within or across communities. There's something we have at RAND called the American Life Panel, which is a panel survey that we're going to start leveraging for this purpose of understanding trust and mistrust.
Focus groups. There's a couple of brilliant researchers at RAND who have really been thinking about the experimental or research design that we need in order to get at this problem, but I don't think we have answers yet. I think it's a deeply important question, though. I think a lot of this does come down to trust and mistrust across communities.
Laurie Smith: And so relating to that-- it's a question that was in the chat, so chair's prerogative here. There's a question about this sort of trust, and one of the things that's been raised about trust is the stories emerging technologies can generate.
So I think it was Yuval Noah Harari who, in a recent talk about AI and the future of humanity, talked about AI's potential ability to create stories that could persuade us of things that aren't necessarily true and indirectly undermine society. What credence do you give to that sort of concern?
Jason Matheny: Yeah. In the same way, we see that personalised advertising on various web platforms has kind of hitchhiked on errors in human psychology-- that we're susceptible to various kinds of seduction by information. There are just certain things that we're going to be easily convinced of and not very sceptical about.
And in the same way that advertising exploits some of those cognitive biases or heuristics, I think increasingly we could see longer narratives that are automatically generated to exploit some of those biases or vulnerabilities, and I think that is a real risk.
Actually, I had a conversation once with some technologists about what they thought might be existential threats that we weren't paying sufficient attention to, and one of them said the lotus-eaters. That basically we could, as a society, just sort of feed ourselves with information that made us feel very happy and very satisfied but wasn't actually leading to any productive advance in society.
We could either put on our VR goggles and just be roaming around and entertaining ourselves without actually doing anything productive or do the equivalent of that by absorbing other kinds of media that have been personally created for each of us in ways that optimise our euphoria but don't actually lead to any productive social change and might actually repress effective social change.
So this is, I guess, a sort of Aldous Huxley view of what a totalitarian regime could be. It doesn't require necessarily people being put in prisons. It may just be one where the society is effectively comatose because of being fed this steady diet of personalised entertainment.
Laurie Smith: And so moving a bit-- so we've got another question here. Moving away from that bigger existential risk to more immediate challenges around generative AI. What do you think-- "what are your thoughts on the social risks of tools such as ChatGPT and Bard and others? And are there any specific policies you think could mitigate those risks"?
Jason Matheny: Yeah. Some of the societal risks are ones that involve the application of large models to classes of other technologies that can be destructive, so cyber weapons, and biological weapons. I think both of those could be more capable and more accessible because of these large AI models.
I think it's unlikely that current large models like GPT or Claude or a handful of others would allow a complete amateur to launch a major cyber-attack. It's more probable that an already sophisticated actor could do things at a greater scale or more quickly using some of these tools.
But I think it is conceivable that in the next several years, as these tools become even more capable, that that will change, that fairly amateur actors could actually be doing things that are fairly sophisticated. And that, I think, is risky. Yeah. So I think those are some of the societal risks that I'm most anxious about.
Laurie Smith: And what's your view-- we've got a question here. "What's your view on regulating AI"? What should governments or supranational bodies do about it?
Jason Matheny: I think we do need guardrails. And I've been impressed by the AI labs themselves saying, we think we need guardrails-- like, please, government, give us guardrails. That includes thinking about the supply chain that ultimately makes these models possible: everything from the chips that go into the data centres that are used to train the models, to the data centres themselves, to know-your-customer screening.
Do the cloud providers actually check on who is using their infrastructure to train a large model? Have the models undergone safety testing and red teaming to see ways that they could be misused or ways in which the systems could be misaligned? I was impressed that some of the recent large models had a fair amount of red teaming and safety testing, leveraging work of places like the Alignment Research Centre.
And then I think we should be having a real debate in policy around open-source models. On the one hand, open-source software can be a great tool for democratising tech and for having lots of eyeballs to look at errors and make the tech stronger and more robust. But in other ways, releasing a large language model into the wild without understanding first enough about its safety or security could present enormous risks to society in ways that we might not understand.
We as a society have decided we're not going to open source nuclear weapons or nuclear blueprints, and I think it might be more analogous to that than to say open sourcing a web browser. So I think we'll just need to be thinking really deeply about what the consequences would be of large open-source models.
Laurie Smith: And what do you think-- there was a question, but it seems to have gone now. There was one which I thought was a good one, so I'll ask it anyway: about the risks of mass unemployment from these sorts of technologies and how to handle that, with the huge productivity upsides but also many people at risk of losing their jobs and all the social dislocation that could happen as a result.
Jason Matheny: Yeah. The world has gone through significant technological change in the past in which we've had time to adapt. So for example, the introduction of mechanisation to agriculture, the introduction of typewriters, of printing presses, of electrification, of industrialisation, of computers. Those have been technological changes that led to significant labour change.
So folks who had grown up-- I think in the United States, about half of the US population in 1900 was employed in agriculture, and now it's around 1% or less. It's not that 49% of the United States is now unemployed. It's just that the daughters and sons of farmers then became something other than a farmer.
I think the challenge now is we're seeing such massive technological change happening so quickly that we might not have the ability to adapt on the time scales that are going to be relevant to a family. And it's going to happen not across generations, but within several years of a single generation.
So I don't know what the best set of policies will be to provide safety nets for that. There are proposals like universal basic income. We need to really think about the economics of that, and how you would implement it. Would it be graduated as the technology continues to displace different categories of labour?
I think for a while we'll see that technology is not displacing an entire occupation but reducing the amount of time that is needed in order to perform a task within that occupation. I think we're already seeing that in applications of AI to things like legal discovery. It hasn't reduced the number of lawyers that are needed, but you need a smaller percentage of a particular lawyer's time on a particular kind of task.
But I think over time we are going to see categories of labour that shift, and figuring out what will be the appropriate safety net for that is, I think, going to be critical.
Laurie Smith: And do you think-- there's a question here from Antonio Amodio. Hopefully I'm pronouncing that correctly. "Do you think the state of AI technology is sophisticated enough to sway the upcoming US election--" and I suppose there's going to be a UK election in a couple of years as well. Will it sway it in a meaningful way or could it?
Jason Matheny: Yeah, possibly. I do worry a lot about this because I think-- I said that I think at some point we're going to figure out a way to deal with disinformation that's AI generated. I think, though, that that will take a few years. And I'm worried that we won't have already built in those mechanisms for provenance or watermarking in time for this election.
So I think we could see some sophisticated disinformation attacks in the 2024 election in the US. I think that's pretty plausible and worrisome. And unfortunately, the digital forensics that exist right now to really figure out whether something is authentic or not aren't all that reliable, especially with text.
So if somebody wants to do spear phishing either for cyber attacks or for disinformation, that right now is very hard to detect what's been human generated versus automatically generated. But it's possible that we'll see generated media being used in some significant way to try to sway the election.
Laurie Smith: Talking of spear phishing, a colleague of mine flagged a paper where someone essentially worked out how to spear phish all of Britain's members of parliament and what tools could be used, which opens up a risk as well, because they're almost telling people publicly how to do that.
Maybe going on to a slightly more positive direction, we've got a question here from Milly about how do you stay hopeful in the context of these quite challenging global issues?
Jason Matheny: Yeah. A friend of mine calls me an epochal optimist because I do-- most of my day job is spent thinking about things that aren't all that pleasant. But I really do think if we can avoid massively fudging things up, the future really could be brilliant. We've made a lot of progress as a civilisation. It's much better as a human being to be alive today than to be alive a century ago or 1,000 years ago. We're just so much better off.
So many humans are substantially freer, better educated, better fed, better cared for, better loved than we were in years before. And I think things are likely to stay on that track if we just don't completely derail ourselves through one of these existential threats. So that's one thing that makes me hopeful is just that the long arc of history bends towards justice but also bends towards greater prosperity for most of the planet.
The rates of absolute poverty are so much lower today than they were even 50 years ago. It's amazing, the progress. That makes me optimistic. The other thing that makes me optimistic is that I think each generation really does get better, gets more enlightened, gets more thoughtful, more compassionate.
And I think a lot about our treatment of nonhuman animals. I think we've become more thoughtful about that over time. I think that will also continue to get better. So I really am optimistic as long as we can avoid a handful of these really nasty risks ahead.
Laurie Smith: And what are you-- as CEO of RAND, what do you do? You've mentioned your work on truth decay. What are you doing at RAND that can help realise the opportunities around emerging technologies and strengthen our democracy and think about the future in a better way?
Jason Matheny: Yeah. I mean, so much. RAND, it's the world's largest policy research organisation. We have 2,000 people from so many different disciplines working on about 1,000 projects at any given time. And I think one of the things that makes me optimistic about the world is that we have organisations like RAND that are really thinking deeply about these policy challenges and how to address them.
A lot of RAND's work is descriptive. It's sort of just understanding what is the state of the problem, how do we measure it, can we tell whether we're making progress? And then a lot of it is prescriptive. We think that the best ways of solving this particular policy problem would be A, B, or C, and we'll systematically analyse the options and then figure out their costs and benefits, their advantages and disadvantages.
So this analytic approach or evidence-based approach to policy is something that also makes me optimistic, the fact that there are institutions like RAND and others that try to apply the best in science and research and analysis to the most consequential policy problems.
Nesta also gives me a lot of optimism. I've interacted a fair amount with Nesta over the last several years, and I think that one of the things that makes me optimistic is that you will attract so many smart people who want to make the country and the world better.
And this combination of raw intelligence and incredible levels of compassion and a mission focus, I think that's ultimately what's going to make the world better. It has historically been what made the world better. Anyway, I'm optimistic for reasons like that.
Laurie Smith: Well, that's exactly the answer I think I wanted and my colleagues all wanted to hear. So a final question: you might be aware that Nesta recently published a supplement in partnership with the UK policy magazine "Prospect" where we imagined a fictional minister for the future who asked doers and thinkers to propose radical solutions to some of the UK's and the world's biggest challenges.
What would be your proposal to our fictional minister around either emerging technologies or existential risk or democracy or whatever issue you think is important?
Jason Matheny: Yeah. For a radical solution-- there are, of course, a lot of plainer solutions to our problems, like coming up with good regulatory guardrails around some of these especially risky technologies. But for a radical solution, I do think some form of significant crowdsourcing effort related to foresight and forecasting, one that tries to get as many participants across UK society as possible, would be valuable.
And it's useful not just for making forecasts that you hope will be accurate but also forecasts that you just think will be informative. Even if they're inaccurate, you get a sense of how beliefs and attitudes across society related to a particular technology vary. And I think that would be really informative.
You'd get a sense of how much optimism or pessimism there is about the future of AI or biotech. You'd get a sense of how it varies geographically, how it varies across the political spectrum, across the kind of information that people absorb. And I think it'll give you also then a better sense of how can we engage more citizens in the kind of governance work that we need to do that's ahead of us.
Laurie Smith: I think my colleagues in Nesta's Centre for Collective Intelligence Design, which has looked at some superforecasting work-- I think inspired by some of the work you're involved in-- would absolutely love that answer.
Well, I think we're almost at time. So thank you very much for some really interesting thoughts. I hope it's been useful to the audience. And now we've reached the end of the event, I'd be really grateful if those joining us could perhaps fill in a short survey. The link will be shared in the chat, and it's also available in the event description. As a thank you for filling in the survey, audience members will be entered into a prize draw to win a £50 bookshop.org voucher.
And if you haven't already, please sign up to our newsletters where we'll let you know about upcoming events. I think I'll be talking to James Bridle about his book "Ways of Being" sometime next month. So finally, just thank you again very much to Jason. Really interesting discussion.
Jason Matheny: Thanks, Laurie, for the great questions. These were brilliant. And thanks to all the participants for the questions. This was excellent. Really enjoyed it.
Laurie Smith: Fantastic. Thank you very much indeed. Bye bye.
Jason Matheny: Take care.
[MUSIC PLAYING]
The opinions expressed in this event recording are those of the speaker. For more information, view our full statement on external contributors.
Jason Matheny (he/him) is president and chief executive officer of the RAND Corporation, a nonprofit, nonpartisan research organisation that helps improve policy and decisionmaking through research and analysis. Prior to becoming RAND's president and CEO in July 2022, he led White House policy on technology and national security at the National Security Council and the Office of Science and Technology Policy. Matheny has served on many nonpartisan boards and committees, including the National Security Commission on Artificial Intelligence, to which he was appointed by Congress in 2018.
Laurie Smith (he/him) leads on strategic foresight for Nesta. He oversees much of the organisation's research into emerging trends, novel technologies and promising interventions. Prior to joining Nesta he worked at the Royal Society, the UK's national academy of science, where he most recently led on emerging technologies and futures. Previously he worked at the Academy of Medical Sciences on policy around medical science, public health and international health.