Nesta is an innovation foundation. For us, innovation means turning bold ideas into reality and changing lives for the better. We use our expertise, skills and funding in areas where there are big challenges facing society.
This event took place on Monday 26 June. You can watch the recording below.
In this Nesta talks to…, James Bridle, writer, artist and technologist, was in conversation with Nesta's Laurie Smith to explore the concept of intelligence and its implications for our humanity in a technology-driven world.
Planetary intelligence is a trending topic which Bridle explores in their book Ways of Being, discussing how artificial intelligence (AI) and emerging technologies relate to our understanding of intelligence and the environment. James suggests that intelligence is not solely a human attribute but can be observed in various forms and relationships. They emphasise that intelligence is dynamic, embodied and relational, and challenge the notion that human intelligence is unique. James explores examples of intelligence found in nature, mentioning the "wood wide web", where trees communicate and share resources through underground fungal networks. By examining such examples, James emphasises the importance of recognising the diverse intelligences present in the world, often overshadowed by our focus on AI.
On the concept of collective intelligence, Laurie and James discussed how AI algorithms can offer alternative search patterns and problem-solving approaches. For instance, octopuses exhibit what Bridle calls "confederated intelligence" within a single body, while bees engage in a democratic voting process for decision-making. By combining human intuition with AI's expansive search capabilities, novel and unexpected solutions can emerge. Similarly, citizens' assemblies, randomly selected from the population, showcase collective intelligence in governance, leading to innovative and inclusive decision-making.
Laurie and James also touched on the idea of acknowledging and engaging with non-human intelligence in politics. Bridle suggests exploring ways to represent and advocate for non-human entities, which would require recognising and granting equal rights to all beings. Overcoming the notion of human superiority is vital to addressing the ecological crisis and acknowledging our interdependence with other species. A cultural shift towards respecting and preserving diverse ecologies is therefore essential for safeguarding our own future.
From planetary intelligence to collective decision-making, recognising diverse intelligences and engaging with non-human perspectives is crucial for developing our relationship with technology and the natural world.
Laurie Smith: So welcome to our latest Nesta talks to, our conversation event series with today's most exciting thinkers on the big topics related to our missions and innovation methods. My name is Laurie Smith. I lead much of the research in the Discovery Hub at Nesta, the UK's innovation agency for social good. We design, test and scale solutions to society's biggest problems. Our three missions are to help people live healthier lives, create a sustainable future where the economy works for both people and the planet, and give every child a fair start.
The Discovery Hub is responsible for helping bring the outside into the organisation by considering the consequences of external trends and emerging technologies for Nesta's work. One interesting trend that's been receiving attention of late is planetary intelligence, which James Bridle dives into in their book Ways of Being, which has recently come out in paperback.
James is a writer, artist, and technologist. Their artwork has been commissioned by galleries and institutions, and exhibited worldwide and on the internet. Their writing on literature, culture, and networks has appeared in magazines and newspapers including Wired, The Atlantic, the New Statesman, The Guardian, and the Financial Times. They're the author of New Dark Age, published in 2018, and Ways of Being, which we're going to talk about, published in hardback in 2022. And they wrote and presented a show called New Ways of Seeing for BBC Radio 4 in 2019. Welcome, James.
James Bridle: Hi, there. Thanks very much for having me.
Laurie Smith: Thank you very much for joining us. Now, before I start, I wanted to invite our audience to join the conversation in the comments box on the right-hand side of their screen, and to ask any questions throughout the event. Closed captions can be accessed via the LinkedIn livestream.
So let's start off. So I find your book really, really interesting. In fact, I've taken the advice from The Washington Post on the cover and actually read it twice. And perhaps we can start by asking what got you interested in the search for planetary intelligence.
James Bridle: So I've come from sort of multiple backgrounds. My academic background, a very long time ago now, was actually in artificial intelligence, in the last wave of that, getting on for 20 years ago now. So I've sort of been vaguely interested in the subject for a while, while also being somewhat cynical about it.
And in the subsequent time, my work's taken me to a few places through art and writing, mostly looking at the subject of technology, and also mostly looking at it again, with quite a sort of critical perspective. So less interested in the nuts and bolts of the technology, though understanding them as important, but really understanding their kind of social and political effects.
And in the last few years, like quite a lot of other people, I imagine, I've been trying to refocus my practice, what I do, what I'm interested in, what I talk to people about, around ecology and the environment, as clearly something quite important we should be talking about more, and which is in real need of some new ideas and some new thinking.
I don't know whether I can provide that. But what I was really keen to do was see how I could come to this subject and bring something from what I already knew. So rather than just diving straight into ecological questions, to say: look, I've assembled a certain body of knowledge around critical thinking about technology; how might this apply in this area? And one of a number of different breakthrough moments (because the way I work means these things tend to happen over and over again, usually quite a few times before I even notice what's happening; the universe will start banging on the door, but it takes a while to get through) was that I really wanted to understand why people were so obsessed with artificial intelligence.
That's something that I'd thought about for a long time and still really struggle to get that excited about. Maybe we'll talk a bit more about that later. But I had this realisation that there's this fascinating moment we're in where on the one hand, there's all of this cultural interest in artificial intelligence and this machine intelligence, whatever it is.
And it comes just at the moment when decades of research across the sciences, and in non-Western scientific traditions, are really starting to change our idea of what intelligence is, who possesses it, and the relationships we might have based on those things. I really wanted to try and bring those things together, and that was one of the starting points for this work.
Laurie Smith: Really interesting. I've noticed that you start with the theme of intelligence. I think you challenge the idea that human intelligence is unique, and suggest there are many different ways of doing intelligence. What do you [INAUDIBLE] some examples?
James Bridle: I mean, I could pull in all kinds of things, really. But if you look back: so I set out naively to write broadly about intelligence and to talk about it a little bit. And I quickly discovered that it's a very hard thing to define. Actually, when one sets out to define it, when any writer or theorist sets out to define it, what you tend to end up with is these grab bags of various qualities.
So people might talk about things like complex thinking, planning ahead, having a sense of self, tool use. There are so many different examples of things that are considered to be these markers of intelligence. And then depending on your subject, your interest, you might grab a few of them and put them together to say, well, that's what intelligence is.
But what's most striking throughout this literature is that ultimately this comes down to: intelligence is what humans do. It's always defined from that perspective. And what that's done historically is radically limit our ability to acknowledge the intelligence of other beings, because they do intelligence in all these different ways.
And the way I'm phrasing that is quite deliberate, in that intelligence is not this static, fixed thing. It's changing all the time. It is something that is done; it's verbal, as in, it's a verb. And so intelligence is performed in all these ways in relation to the world around us. It's meaningless to think of the intelligence possessed by a brain in a jar, for example, because there can't be anything there: there's no world to relate to, and thus to do intelligence with, to think about and think with.
And as soon as you start to think about intelligence, therefore, as something that's about our relationship with the world around us, it stops being either singularly human or something that only happens in the head. And really, the realisation I came to in writing this book is that the two main terms I use to think about intelligence are embodied and relational.
Embodiment is really key. It's why we tend to think of intelligence as something entirely human because, of course, we are embodied as humans and we have this body plan. I said before that this focus on human intelligence has led us to miss the intelligence of other beings in all kinds of ways.
My favourite example of that, which illustrates the embodiment point quite well, is gibbons. Gibbons, for a long time, were subjected to the kinds of experiments that we subject other creatures to, experiments that are supposed to determine their intelligence according to this kind of bar that we've set: one that's like ours, but maybe a bit lower.
A classic test is tool use. You've got an animal in a cage, you put some treats outside the cage, you give the animal some kind of tool, like a little stick lying on the floor, and the animal uses this tool in some way to get the thing. And that seemed to work for a lot of the higher primates.
So orangutans, gorillas, small children. But also chimpanzees and macaques; some monkeys as well as apes. They all seem to do this very well. Gibbons, for decades, refused to participate in this experiment; they just didn't seem to show any interest in it at all. And this doesn't make sense evolutionarily, because they sit between us. I have problems with that evolutionary view as well, but that's a slightly different issue.
But they should have been able to do this, and we didn't understand why. And it took decades before someone rearranged the experiment, redesigned it so that the pointing sticks, instead of lying flat on the floor, were hanging from the top of the enclosure.
And as soon as they did that, the gibbons went, oh, like this. And in that moment they became intelligent to us. They'd been intelligent all along, but we were suddenly capable of seeing it, because we'd acknowledged that they were embodied differently: gibbons are brachiators.
They spend most of their lives swinging around in the trees, so they have a different body plan: they have longer fingers, which make it harder to pick things up from the ground and easier to pick things like this. But also, their awareness is pointed upwards.
So they have this intelligence; they just do it differently because of their embodied nature. And from that flows this relationality, this fact that intelligence is something that emerges out of our relationships with other beings rather than being an entirely interior process.
And both of those (we go through many other examples) point to intelligence being something that is, in my understanding, universal, but done differently, and sometimes in ways we can't even possibly comprehend, by all kinds of creatures.
Laurie Smith: So if intelligence is this relational, dynamic, embodied phenomenon, where do you set the boundaries for what intelligence is? Or are there any?
James Bridle: So, I mean, what happens is you start to stop thinking about it in terms of boundaries, because once again, this is what we've always done. We've wanted to draw these kinds of red lines between us and other creatures. And of course, historically, we've drawn them between humans quite extensively as well; the extension of the recognition of intelligence, consciousness and much else to non-human species started a long time ago, when it wasn't even extended to most humans.
And we've gradually expanded our recognition of the differing abilities of all members of the human species alone before we start to push out further. And we're still doing that process.
What I think is incredibly interesting as part of this process is to stop trying to see intelligence, or quite a lot of other things, as these specialised domains of difference by which we can separate things and make them different from each other, and instead to approach it essentially ecologically.
Ecology is based on relationships, on the ways in which these things connect to one another, rather than the ways in which they split apart. That's why ecological ways of doing science have been so revolutionary, because historically, most of the Western scientific method has been a kind of splitting and clumping process: breaking things apart, cutting them up into small pieces, separating them, and trying to draw separate little diagrams of what all the little pieces do, and therefore completely missing the whole ecology of what's actually going on.
That's slowly being repaired. And one of the ways in which we repair it is to stop treating qualities like intelligence as having hard, defined boundaries that separate things, and instead to treat them as qualities we can think with, qualities that might allow us to communicate better.
So my starting point... well, it wasn't my starting point; actually, it took me a long time to get there. But my starting point for this discussion now is that everything is intelligent. And I can include in that not just humans and non-human life, but things that we don't often consider sentient, like ecosystems, rocks, and the components of the universe.
Because if intelligence is relational, then it can exist between all kinds of bodies and beings. But the point is that you don't start from a position of asking, are you intelligent? You assume intelligence, and then you look for how it might animate and allow us to learn something about the world.
And fundamentally, you approach other beings and, instead of asking, in what way are you like me?, you ask, what is it like to be you? How does your experience create a world? And then perhaps we can find some kind of overlaps between all these multiple worlds that different beings and organisms inhabit: where are the overlap points at which we might actually be able to have some kind of communication or interest?
Laurie Smith: And so building on those overlaps, and perhaps on your introduction, as you move from expertise in digital technology and artificial intelligence towards ecological issues: you're making a case that artificial intelligence and other digital technologies can be used to help understand other forms of intelligence. How so?
James Bridle: I mean, to be clear, at the moment they're not particularly, and we can talk about the reasons why not. But I find it very striking that there's something that seems to have happened a few times in the history of human knowledge-making: it has required us to build almost toy little versions of the world before we come into a greater awareness of what already exists.
So my go-to example of this is the relationship between artificial networks and biological networks as they exist out in the world. I'm sure many of the people here listening to this have heard of the extraordinary networks that we now understand underlie forests, for example.
Where you have these extraordinary systems in which trees of all kinds of species are linked together through mycorrhizal networks, fungal networks under the ground. And these networks connect trees of different species beyond their immediate kin. And they allow nutrients to be passed between them so that deciduous trees pass nutrients to the conifers when they're shading them in the summer, and the conifers return the favour in the winter.
But also, they pass information. If a tree is attacked by insects or something, warning signals can be sent through this network so that other trees can start to release chemical defences before the insects get there. So it's this extraordinary, vibrant community of information- and material-sharing that's going on within forests right under the ground. We've only recently become aware of it.
It's striking. When the first pieces of that research were being published, in the 1980s and early '90s, there was a very famous issue of Nature, the scientific journal, in which they did a special issue about these discoveries.
And on the cover of that issue of the journal, they called it the "wood wide web", a term that's now become quite famous for describing it. Now, the internet and these networks are not the same thing; they do all kinds of things differently. But they have this really important metaphorical quality that links them together.
And it's really striking to me, not only that they chose that name because they knew that people reading the journal had a mental model of networks from using computers, one that would help them understand the forest networks, but also that the researchers who made the first discoveries about the wood wide web were, of course, among the first people to be connected to the internet, and so given that metaphor, because academic and scientific institutions were some of the first places to be connected.
And that association continued in all kinds of ways. When the internet was being very intensively studied in the 1990s as a technological object, it gave birth to a new mathematical discipline, a new topological discipline, called network science, which is basically a new way to understand networks, because the internet didn't function like previous networks.
It's what's called scale-free, which means it can grow and grow and grow, you can take bits out and put them back, it'll route around damage, and nodes can have different strengths and connections. And that was all new. We'd never seen a network like this before.
Except, of course, we had: the underground tree networks, which, despite not being that network, are also scale-free networks, and which we were able to analyse and understand better because we had developed this mathematics for the internet.
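The growth mechanism behind such scale-free networks can be sketched in a few lines of Python. The sketch below is a toy preferential-attachment model in the spirit of Barabási and Albert, a standard mathematical idealisation rather than anything specific to the internet or to fungal networks; the function name and parameters are invented for illustration. Each new node links to an existing node with probability proportional to that node's current degree, which is what produces a few heavily connected hubs.

```python
import random

def preferential_attachment(n, seed=0):
    """Grow a toy scale-free network: each new node links to one existing
    node chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]              # start from a single link
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        # picking a random endpoint of a random edge is degree-proportional
        attach = rng.choice(rng.choice(edges))
        edges.append((new, attach))
        degree[new] = 1
        degree[attach] += 1
    return degree

degree = preferential_attachment(2000)
# a handful of heavily connected hubs emerge, while most nodes keep degree 1
print(max(degree.values()), sum(1 for d in degree.values() if d == 1))
```

Networks grown this way share the rough properties described above: they can keep growing indefinitely, and removing a random node rarely disconnects them, because most nodes are leaves and the hubs provide many alternative routes.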
So there's this weird parallel process going on. And to bring it back to your question, this is, to me, the most interesting thing about AI: that perhaps we are so focused on intelligence in this form because it's necessary for us (ultimately quite blinkered and selfish, and unable to see beyond our own limits most of the time) to construct this toy version of intelligence, in order to parse out certain questions and understandings about it, and in order to see the rest of the world.
Because one of the most striking things about artificial intelligence (again, whatever it is in the moment) is that despite our science-fictional assumption that artificial intelligence is just human intelligence-plus, or artificial general intelligence, or whatever you want to call it, most of what we see in the world is actually radically unlike human intelligence.
It's like it's just a different kind of intelligence. There's a different way of doing intelligence, which brings us back to what we were saying earlier, which is that there are multiple ways of doing intelligence.
We haven't been paying enough attention to all the other ones, but even this one that we've created ourselves seems to be doing something different. And if there's human intelligence and machine intelligence, and those are different things, then there is more than one way of doing intelligence: almost certainly more than two ways, and in fact probably infinite ways.
And so artificial intelligence, this entirely human creation, becomes like an opening by which we can start to pay attention to all the other intelligences that surround us.
Laurie Smith: And you mentioned human and artificial intelligence as just two subcategories of intelligence. Nesta's done lots of work on collective intelligence, which tries to bring the intelligence of machines and people together at scale. We've got a Centre for Collective Intelligence Design. What's your take on this idea of collective intelligence?
James Bridle: Yeah. I mean, again, it's part of or connected to the idea of multiple forms of doing intelligence. And intelligence can be collective or multiple in many ways and it just gets really interesting, because again, it's just like this different way of doing it.
So you can look at collective intelligence within individual beings, for example. You can look at the way octopuses do intelligence: they have bundles of neurons, in fact up to half of what we would call their brain tissue, running throughout their whole bodies, and their limbs contain neurons that appear to allow those limbs to operate separately from the central nervous system.
The knowledge is that the limbs go off and do something over here, and then they reconnect and vote, or something; we're not entirely sure. We don't know what it's like to be an octopus, but they appear to be confederated intelligences within a single body.
Or you have the remarkable information-processing capabilities of a swarm of bees, for example. We've known for a while that bees share knowledge through the waggle dance, this amazing little wiggle-bumming dance that they do to tell the other bees of the hive where flowers are. They'll fly out, find nectar, come back, and then do this dance, which tells the other bees how far away, and at what angle to the sun, to fly in order to go and get more of that nectar.
But they also do that as part of a democratic voting process. If they're looking for a new nesting site to take the swarm to, they'll go and find one, come back, and dance its location. But other bees will be dancing their own locations as well. And so they will gradually, sometimes over days, integrate this information.
In much the same way that we understand our brains to integrate information from multiple sources. And this allows a very complex decision-making process to occur. Again, a form of collective intelligence, in this case taking form between multiple members of a single species.
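The integration process described above can be caricatured in a toy simulation. The sketch below is a deliberate simplification, with invented site names, rates and recruitment rules rather than figures from bee research: scouts advertise sites in proportion to quality, uncommitted bees follow the strongest dancing, and supporters of poorer sites give up faster, so the swarm settles on the best option without any single bee comparing sites directly.

```python
import random

def swarm_consensus(site_quality, n_bees=100, steps=200, seed=1):
    """Toy model of nest-site choice: dancing effort per site scales with
    supporters times quality; uncommitted bees follow dances in proportion
    to that effort; supporters of poor sites abandon them faster."""
    rng = random.Random(seed)
    commitment = [None] * n_bees          # which site each bee backs, if any
    for _ in range(steps):
        effort = {s: commitment.count(s) * q for s, q in site_quality.items()}
        total = sum(effort.values())
        for i in range(n_bees):
            if commitment[i] is None:
                if total == 0:
                    commitment[i] = rng.choice(list(site_quality))
                else:
                    # follow a dance at random, weighted by dancing effort
                    r = rng.uniform(0, total)
                    for s, e in effort.items():
                        r -= e
                        if r <= 0:
                            commitment[i] = s
                            break
            elif rng.random() < 0.05 / site_quality[commitment[i]]:
                commitment[i] = None      # give up and become uncommitted
    counts = {s: commitment.count(s) for s in site_quality}
    return max(counts, key=counts.get)

print(swarm_consensus({"hollow oak": 5.0, "chimney": 1.0, "wall cavity": 2.0}))
```

The positive feedback (better sites recruit faster and lose supporters more slowly) is what lets a quorum emerge from purely local behaviour, which is roughly the dynamic the waggle-dance voting exploits.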
And yeah, where I think it gets really, really interesting is when you have that kind of cooperation happening across species, and you have these radically different ways of thinking that might be working together. And we're seeing this in some applications of artificial intelligence.
For example (and again, I don't think this is the most interesting example, but it's a very handy way of explaining it), human intelligence tends to be somewhat directed, in the sense that we can't help but have what are essentially hunches. We have an idea of a subject and of where a solution to a problem might be found, and we tend to concentrate our resources on that area. And if we think something's important over here, we're quite bad at looking over there to see if there might be something else altogether.
Artificial intelligence in certain formations won't do that. It'll spiral out in a completely different search pattern across the possible solution space. And so when you have those two things combined, really weird things happen. In the book I wrote about something called the optometrist algorithm, which is used in nuclear fusion research, and which is exactly this.
You have an incredibly complex set of parameters for an experimental fusion reactor, in which essentially (not even an AI, just quite a complex algorithm) the machine sets a bunch of possible parameters, runs them, and then shows a subset of the results to a human, who follows this hunch-driven way of thinking (it's simplified, but that's the idea) to pick out which way to go next.
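That loop can be sketched roughly as follows. This is a toy illustration of the idea rather than the actual fusion code: the one-dimensional objective and the scripted `prefers` function (standing in for the human's "better or worse?" judgement, as at the optometrist's) are invented for the example.

```python
import random

def optometrist_search(propose, prefers, start, rounds=20, width=5):
    """The machine proposes candidate settings; a chooser (here a stand-in
    for the human) answers only the pairwise question 'is B better than A?'."""
    best = start
    for _ in range(rounds):
        for candidate in (propose(best) for _ in range(width)):
            if prefers(best, candidate):   # 'lens two is clearer'
                best = candidate
    return best

# Toy objective: tune one parameter toward an unknown optimum at 3.7.
random.seed(0)
score = lambda x: -abs(x - 3.7)
propose = lambda x: x + random.uniform(-1.0, 1.0)   # broad machine search
prefers = lambda a, b: score(b) > score(a)          # scripted 'human' hunch
print(round(optometrist_search(propose, prefers, start=0.0), 2))
```

The division of labour is the point: the machine explores widely without preconceptions, while the chooser only ever has to make simple comparative judgements, which is a task humans are reliably good at even in spaces they can't model.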
And so they explore the problem space together in a way that the human or the machine alone wouldn't. And I think that has immensely interesting possibilities for the future. One of the most interesting things happening in human governance at the moment, for me, is these things called citizens' assemblies, another really good example of collective intelligence.
Citizens' assemblies are (I'll try and keep this brief) forms of democratic assembly in which, instead of people being voted in or selected for a particular set of expertise, people are randomly selected from across the population. A very famous, very interesting and successful one has been running in Ireland for the last few years, but there are examples of them all over Europe, and increasingly further afield.
And what happened (I'll use the Irish example) is that 100 people were selected at random from the population, from the whole of the voter rolls. And they were basically put in a big hotel for a bunch of weekends and asked to explore some of the most complex, knotty problems the nation faced: things like what to do about an ageing population, what to do about the ecological crisis, and, in Ireland particularly, what policy to take on abortion.
This was the form of assembly that led to the referendum that overturned the abortion ban in Ireland. And in almost every case in which these citizens' assemblies took place, these 100 random people from all kinds of backgrounds came to consensus, which is an almost impossible thing for us to imagine in the times we live in, characterised as they are by division.
They would come to these consensuses, and the stances would be radical, beyond what politicians thought was possible. They would come up with new ideas and ways of implementing them.
And it's an example of collective intelligence that specifically comes from having many different ways of thinking about a subject, because one of the ways in which citizens' assemblies differ from traditional assemblies is that they're not voted in, so you don't have a political class doing this.
And they're not composed of people who are already recognised as experts, so you don't have this expert class either. Both of those classes may be useful in certain situations, but they're not terribly good at coming up with the new ways of thinking about things that we're all very aware we need.
What happens is you get people from a wide variety of different backgrounds and experiences. Think about it a bit like people who are differently embodied towards their environment. And that produces radically new ideas.
And one thing that we desperately need to think about is, A, how do we do more of that: more of this collective intelligence, more of bringing people of meaningfully, radically diverse backgrounds together to think anew about subjects. But also, how do we extend that beyond human intelligence, and start to think about what role non-human intelligence has to play in those kinds of decision-making and thinking processes as well?
Laurie Smith: That would be a really interesting topic to dive into. If you have a world where we acknowledge non-human intelligence and engage it in politics, what might that look like? What are the practical ways that might manifest?
James Bridle: It's really hard to see at the moment, but there are all kinds of suggestions for this, most of which do tend to involve essentially having someone speak for non-humans, because that's the hard problem here. There's Bruno Latour's suggestion of the Parliament of Things, which has been popular in environmental thinking for a very long time: the idea that essentially you would have representatives within a parliament, or whatever it might be, whose job was to speak for the non-humans.
I think we're starting to try and imagine ways that might work better than that. But an absolute foundation for it, before you get into the thorny questions of how you speak to plants or animals in a more direct way, is actually how they are recognised and given standing, and how they are given essentially equal rights.
Because that is the fundamental basis for this. We cannot hope to address the ecological crisis while we still see humans as being utterly superior to everything else on the planet, because that attitude is precisely what has caused the situation we find ourselves in at the moment: we've considered humans to be superior, and thus entitled to take and extract and do everything else we want to the planet.
And for me, the idea of political formations across species is (practically, we're still working out how to do it) culturally of such foundational importance that we have to recognise the status of respect that needs to be accorded to other beings, and our interdependence: the fact that, actually, the destruction of other species and of other ecologies is the destruction of our own ecologies and of what we depend upon.
Laurie Smith: It's been really, really interesting talking to you. I'm now going to turn to some questions from our audience, some of which were posted in advance. We got one from Jesse Thompson. He says: there seems to be genuine enthusiasm and hope about the possibilities around an interspecies internet, which I think is something you touched on in the book, and the potential positive impact of working with sensor data.
Do you still feel that way since writing the book? And are there any other initiatives or approaches you'd want to include that you have since learned about if you were writing it again?
James Bridle: Oh, that's a really good, meaty question. Yeah. So the Internet of Animals, or the Interspecies Internet, for those who aren't familiar with it, is essentially the idea that we should expand this network we call the internet, in whatever form it takes, to non-humans in various ways.
Let me give you an example of a really amazing Internet of Animals project that I really love, which came out of extensive work around animal-tracking. It's the Icarus project, which comes out of the Max Planck Institute in Germany. What they did was attach little accelerometer sensors to a whole host of animals (cows, goats, sheep, dogs; I don't think pigs, actually) at a couple of sites in Italy.
These accelerometer tags stream data about the animals' activity, via an antenna on the International Space Station, back down to Earth, where it's all recorded. And a couple of these sites were on the slopes of Mount Etna and in the village of L'Aquila, two earthquake hotspots.
And they realised that by analysing the data on how active the animals were, they could predict earthquakes with more accuracy, and further ahead, than any other method we've developed. Which is astonishing and brilliant. It's something that's been attested to in folk knowledge for a long time, that animals behave weirdly before earthquakes, but we didn't really have any way of making use of that knowledge. And now we do.
And crucially— there are several crucial things about it, one of which is that we don't have to understand the mechanism. We still don't know how these animals have this sense, how they're aware of it, or even whether they're aware of having it, if that makes sense.
But it doesn't matter. We can just listen to them, pay attention to what they're communicating, and act on it, without having to understand it or break it down in a classically scientific way. We can trust them to tell us what we need to know.
And these kinds of techniques are being used to understand animal migration better. For example, tags on large herds of deer and antelope in North America are used to inform changes to planning law: where to build underpasses so that animals don't get killed on highways, this kind of thing.
And that, if you think about it, is a kind of involvement of non-humans in the political process, because you're essentially giving them a kind of vote on our infrastructural systems. You're saying that their voices, or at least their needs and desires, matter when it comes to how humans arrange our lives.
So I think there's this amazing possibility for things like the Internet of Animals to play this kind of role. Though, of course, a bit of me remains deeply suspicious of the idea that we can fix things just by sticking surveillance tags on everything. It's very easy to fall back into the illusion of control that we as humans have in these kinds of situations.
That, oh, if only we knew more, if only we could gather more data, we'd somehow understand this situation and be able to totally control it, to dominate it, which is our original sin that's led us to this place. And I do raise those doubts a little bit in the book and I still have them.
And I guess what might have changed in my feelings is not that I think that's any less important; I've just been strengthened in my beliefs about other ways of communicating with the world around us. That actually, we have a deeply innate ability to commune with the world, one that has largely been severed, or at least very much suppressed, by modern society, by capitalism, by the technologies that we use.
And that it is possible to regain and re-access. It doesn't require all of this technological apparatus. It just requires opening one's consciousness to the world a little bit more and paying a little more attention to it. And that's much more personal and doesn't belong within this kind of scientific discourse.
And it's so vitally important that we are capable of holding a scientific rationality framework for understanding the world on the one hand, while understanding that most of the world escapes that framing and that we need other ways of accessing it and thinking about it as well. And that what is required of us is this ability to modulate between these positions.
One which says that yes, we can take actions based on scientific frameworks and understandings of the world, but which also acknowledges framings of the world that don't fit into that, particularly non-Western ones: a plurality of knowledge is an approach to the world that can contain all kinds of knowledge as well.
Laurie Smith: And what are some of those other ways, or the ones that particularly interest you, or that you think would be particularly fruitful? Other ways of understanding the world, that is.
James Bridle: So there's a scientist I wrote about in the book called Monica Gagliano, whose work is very, very interesting. She's best known within the scientific community for a series of really fascinating experiments on plants, the most famous of which are her experiments with plant memory.
She took these mimosa plants, little plants that curl up when you touch them, so they're really good for observing a reaction as it occurs. She put them on a rail and dropped them just 10 centimetres, and they curled up. Standard.
She then dropped them a few more times and noticed that they stopped curling up. Oh, OK. She did a few other things: if you poked them, they still curled up. So they had somehow understood that this particular action, the fall, wasn't dangerous, and they'd stopped curling up.
So they'd learned. They'd learned that this particular phenomenon posed no danger and had changed their response as a result. Not only that: she tested them weeks and months later and found the same. So they changed, they learned, they remembered over time. They did, essentially, a bunch of things that, when humans do them, we call intelligent. And these are pot plants.
Which is quite a radical result that we don't fully understand, but the experiments continue. What confused or upset a lot of other people is that Gagliano is also very vocal about having a shamanic practice. She talks about spending some months of the year in South America working with traditional knowledge-keepers, taking various kinds of substances in order to speak directly with plant spirits, who have helped design her experiments.
And if you read her work, you really understand how she is working with these plants as collaborators rather than treating them as experimental subjects. And even just phrasing it that way, scientifically, is revolutionary.
She talks about how, historically, the role of botany has been to chop up a plant into tiny, tiny pieces and regard it as a tiny machine, rather than seeing it as a whole organism, rather than seeing it like an animal: something that has behaviours, that has experiences, that might be a being. And when you start to do that, all this other kind of understanding occurs.
And the thing is, if you don't like the shamanic aspect of it, it doesn't matter, because she designed the experiments so well that they work within the scientific framework: peer-reviewed, reproducible experiments published in journals, and so on and so forth. You can understand these as different ways of understanding the world that get us to not the same, but similar places, and that can both exist in the world at the same time.
I struggle with this, too. I mean, I've taken ayahuasca, one of the substances that Gagliano talks about, and I have spoken with plant spirits, and I have no doubt about their existence. And yet I'm so deeply conditioned into Western ways of thinking that I second-guess myself and struggle with that, and struggle to articulate it even within the kind of discussion we're having, because the discussion we're having right now is still framed so tightly within these scientific terms of explanation, even while I'm critiquing it.
And despite my difficulty explaining how, I have no trouble thinking of intelligence as something utterly universal, because I have had direct experience of it. And I communicate with non-human beings through various techniques: meditation, fasting, hiking up in the mountains for days.
That almost doesn't fit within human language, let alone human knowledge systems as I understand them. But these practices are millennia old, they exist across cultures, and they don't actually conflict with the things we learn otherwise, if those things are helpful and useful to us, which, most of the time, should be the criterion on which we base our interest.
Laurie Smith: And that leads us to a question from Jacqueline Bagnell, which speaks to some of what you're saying, and also to a question I raised earlier. They ask: if we don't note boundaries between intelligences, where do qualifications fit, in an education industry based on measuring intelligence? What's your take on that? I imagine it's much, much broader, but I'm interested to hear.
James Bridle: Yeah. I mean, I think qualifications are important for skills-based things. Like, I'm about to start building a house, and I'm learning to build at the moment because I'm going to build it myself, a self-build project, but I'm not going to be so stupid as to do the wiring entirely myself. I don't want the house to burn down. There are important kinds of knowledge that matter when working within certain frameworks.
But I do feel very strongly that the education system that most of us experience at present, which is based on those qualifications and testings, is completely bunk and totally unfit for purpose and should not exist in anything like the form it does at the moment.
I was educated within it, we're still educating within it, we've been doing it for generations: forms of education that seek to instil a certain paradigm of the world into people and then test them on how well they can regurgitate that paradigm, rather than teaching them to think, or learn, or understand the things that will actually be necessary.
My broad understanding (and this is [INAUDIBLE] educational theory, which I don't think [INAUDIBLE] too much) is that the purpose of education is to help us learn the things we need to learn when we encounter them, not to predispose people to any particular system or to fit in with any external idea of what's valuable; particularly not what we have at the moment, which is an education system built around getting people jobs.
It's a larger discussion. But very specifically, intelligence testing is complete rubbish, because all it does is pick one model of intelligence, of which there are many, and use it as the framework for everything. And I think you can extend that critique to the larger system of education, based on grading sets of knowledge rather than teaching critical ways of thinking about and understanding the world.
Laurie Smith: And moving to a slightly different topic, there are a few questions here from Robin Bergman which seem interrelated. One asks: do you think intelligence is Darwinian? But also: what is your take on evolutionary affordances compared to intelligence? And there's another on evolutionary affordances.
James Bridle: I'm not entirely sure what that means, but I'll try to give it one answer, which is that one of my favourite writers is the anthropologist and, later, ecologist Gregory Bateson. He has a very interesting take on intelligence and its relationship to Darwinian theory and history.
He views Darwin as actually having done incredible damage to our notions of thinking and relating. Because when Darwin framed evolutionary theory (and this has been propounded in the worst way by a lot of his followers and the neo-Darwinian movements), he framed the individual, whether that's the individual being or the individual species, as the unit of survival within evolution. So we draw these tight lines down family trees, or through the tree of species, as if they were totally separate, remote strands of evolution.
And Bateson says this is nonsense, because the unit of survival is not the individual, it's the ecology. The unit of survival is man plus environment, human plus context: an entangled thing that contains us and other beings as well. So it's completely ridiculous to talk about evolution as a fight between different species or between different individuals, because what you actually have is complex interrelationships between different ecologies that survive or die together.
But the other extraordinary move Bateson makes at that point is to connect this to thinking as well. Thinking, in his reading, is something that happens across species boundaries too, like what I mean when I talk about embodied and relational thinking.
He says that what thinks is the human plus the environment as well. There is this very close relationship between ecology as a healthy bodily relationship, involving nervous systems, immune systems, and the very permeable boundary that actually exists between us and everything around us, and our capacity to think in terms of networks. Shorn of ecological networks, we're incapable of thinking at all.
And again, that's mirrored back in new understandings of evolution, in things like horizontal gene transfer, which completely muddle all of these lines we draw between ourselves. In fact, there doesn't even really appear to be such a thing as a species; that appears to have been misnamed.
So all of these distinctions, all of these lines are falling, and the thinking that becomes possible as they do is, to me, what's interesting.
Laurie Smith: We've got a question here from Marcus Winter. He says: you mentioned in the introduction that intelligence is embodied. How does that square with AI as just another form of intelligence, presumably not embodied in the sense that you or I might be?
James Bridle: Yeah. It's a really critical way of thinking about it. One of the things I talk about in the book is what I call corporate intelligence. And what I mean by corporate intelligence is, really, the kinds of AI we have at the moment, but also the fuller understanding that those forms of AI aren't just the computer programmes we hear about all the time.
Another way to think of an artificial intelligence is as a corporation. A corporation, a large business enterprise, is a collective intelligence composed of the people employed by that company, but it also has sensors and effectors: it can affect the world in various ways, and it can sense the world in various ways.
It's sensitive to profit and loss, and it changes its responses accordingly. It has legal standing: you can sue a corporation, and a corporation can sue other people. It has free speech; corporate speech is protected. So you can understand a corporation as a kind of organism, a non-human organism, an artificial organism, and therefore an artificial intelligence.
So you can think of corporations as being AIs, and I think that's quite a useful thing to do, particularly when trying to understand how to kill them. But you can also, therefore, understand quite a lot of the AI we have at the moment as being corporate AI, because it is embodied.
These intelligences are embodied in a certain way. It's just that their bodies are computer networks, the terminals we use to interface with them, and the environment in which they grow up, which is largely the tech companies, which have a very specific culture and a very particular view of the world. They have a libertarian flavour because they've grown up in a libertarian world. That's their evolutionary niche.
So of course, they're embodied radically differently to us and radically differently to almost all life on this planet, but they are, in a sense, embodied. They have an environment, they're part of an ecology. And that environment and ecology determines how they think and act.
And actually, thinking about them from that perspective explains quite a lot about them and about how to think about them. It starts to become clear why so much of the framing around AI is so deeply competitive, for example. For a long time, all the AI we heard about was AIs that would beat us at chess, and then beat us at Go, and now they're trying to beat us at making art and writing movie scripts.
And that's in part because that particular form of artificial intelligence, and again, it's just one particular way artificial intelligence might manifest, emerges from a financialized, capitalist world that sees being intelligent as winning at all costs, as a factor of profit and loss and of dominating the competition.
And if you look a little beyond that, you can start to see what AI might look like if it didn't emerge from such a narrow idea of what intelligence is. You have initiatives like Indigenous AI, and various artist-created projects taking more interesting approaches that don't embody those values. But the embodiment is always there.
Laurie Smith: Talking of corporations, we've got another question here, again from Jacqueline, who says: citizens' assemblies would be a valuable way to explore the ethics of AI applications. Would this be a viable form of corporate governance?
James Bridle: Yeah. I mean, there have been calls, amid all these recent questions about AI ethics, for citizens' assemblies to be deployed. I really hope the discourse around citizens' assemblies doesn't get captured by this AI question in some way, but I would say that citizens' assemblies are a very good way of approaching all kinds of problems, and I would support their use.
But I don't think the AI ethics question is necessarily the most important one we should all be concentrating on right now, or at least it could be framed slightly more broadly and perhaps the citizens' assembly would do that.
The thing, I guess, I didn't say about citizens' assemblies earlier that I think is really important is that people come out of them changed. When you go through a citizens' assembly process, experts are involved, but in an educative position: they speak to the participants and help them understand some of the issues without determining the outcomes.
And that means that by participating in these kinds of democratic processes, people increase their agency in response to the complex problems we face, as well as coming up with solutions.
And I think what's really important about these processes, including anything I'm saying about a better education process or a better way of relating to the world, is to think of each one as an ongoing process out of which different possible worlds emerge. We don't know quite what those look like yet, but it absolutely requires us to enter these processes with the expectation that we will be changed, and perhaps troubled in interesting ways, by the outcomes.
Laurie Smith: And you're obviously sceptical of a corporate or capitalist approach as well. We've got a question here from Ignacio Gutierrez Gomez, and I hope I'm pronouncing that correctly. It says: I was wondering how we can use these other forms of knowledge, non-human knowledge, without commodifying or financializing them, and without treating them only as a route to economic growth? Is that possible?
James Bridle: Yeah. I mean, I think it's absolutely possible, and it just requires starting from a totally different place. One of the examples I mentioned earlier, Indigenous AI, is a system being developed largely by Maori speakers in New Zealand: translation algorithms, built using machine learning, that remain within the community. They are barred from being sold outside, so that the benefits accrue only to the community.
And I know that's not the non-human example you asked for, but the reason I bring it up is that non-financial uses of these things will emerge when they are made by people and communities that are not interested in profiting from them in this way, which is why the first step towards them is diversifying... that sounds so paternalistic.
The knowledge already exists out there, among vast swathes of people. It's about listening to those forms of knowledge and not presuming that there is something we have to build on top of them.
That sounds a bit confused, and I really don't want to speak for communities of knowledge that are not my background or my history. But if the simple question is whether there are other ways of being in this world that don't involve subsumption by capital, then yes, absolutely they exist. And like everything else, they're just waiting to be given strength and attention.
Laurie Smith: We've got another question here from Kathy Peach. And she's asking, how would you take the insights from your research and apply it to making governments more intelligent?
James Bridle: Well, I'm not really one for governance myself, so it's a difficult question. As I said when discussing citizens' assemblies, the main ways governments become more intelligent are through collective intelligence, diversity of intelligence, and devolving power to those who are governed but not really given a choice in their government.
Intelligence is never going to reside within single entities in the way that current systems of government presume it does. And therefore, the more decision-making power can be devolved to more diverse and more numerous collectives, the better, really. And that's attested to by plenty of social science research, including all the work that's gone into citizens' assemblies and other such systems.
Yeah, that's what I have to say about that, I think. There are pathways towards it. Some of them are top-down, like the establishment of citizens' assemblies, and a lot more are really bottom-up. You don't change a government by changing the government. You change the government by changing the culture that produces that government and is happy, or not, with it. But both directions help.
Laurie Smith: I think we've got time to fit in one final question. This is from Ike Ovuworle, and it's a topic that has received a lot of media attention, so I'd be interested in your take. Do you think that artificial intelligence will overthrow human intelligence at some point? I suppose the question for me is: is that the right way of thinking about it?
James Bridle: Yeah, it comes back to the question of what we mean by artificial intelligence. And there are a few things to understand there, I think. One of which is that what we currently call artificial intelligence, actually existing artificial intelligence, is nothing of the kind. It's just some really powerful computers.
And as I said before, it's powerful computers of one particular kind. When I studied AI, 20 years ago now (badly), we were studying much the same techniques: these things called neural networks, which go back much further, to the connectionism of the 1960s, and which are still the main technique underlying AI models today.
The thing that's actually happened in the last 20 years is not some major breakthrough in how we think about AI or in what we imagine intelligence can do. What has happened is that large technology companies have spent 20 years building incredibly powerful computers, making vast sums of money as a result, based on taking all the data of human culture, our own lives, our knowledge, our history, our sciences, and ramming it into these machines that are now selling something back to us.
That's not artificial intelligence. That's what the science fiction writer Ted Chiang, in a recent piece, called applied statistics. It's not intelligent. It's just power. And it's primarily capitalist power, primarily money power.
And so for pretty much any headline that asks, is AI going to do this?, you can just replace AI with capitalism, because there's no problem of AI in the present moment that isn't a problem of capitalism. If you're worried about it taking your jobs, if you're worried about it subsuming us in various ways: well, it's just doing that because there's money in it, and that's the system we're in.
I don't think there's anything meaningful either in the idea, the science-fictional idea, though it's nonetheless useful to think about, of artificial general intelligence somehow supplanting us. We are part of an ecology of intelligences, in which intelligences work together, think together, sometimes think against each other, but in which we are all deeply, deeply entangled.
And the very idea that one might come to dominate us is based on the deeply false idea that the universe is based on domination. That somehow there is any kind of hierarchy; and specifically, it's based on the historic idea that humans are smarter and better than everything else.
We're not. We have certain affordances and a birthright and other things that have allowed us to dominate this planet. And that won't continue for very long if we keep that belief, because we are destroying the ecology on which we utterly depend.
That's the opposite of intelligence. That's the dumbest thing we could possibly do. It proves how unintelligent we are that we would destroy the ecosystem on which our lives depend. There is no hierarchy of intelligence. Nothing will come to dominate us. We might, however, destroy much of the world before we come to acknowledge that, and to be honest, we don't need AI's help in doing that.
Laurie Smith: Well, I totally agree. Thank you so much for a really, really interesting discussion; it's provided a lot of food for thought, and I hope everyone in the audience has found it useful as well. James' book is in all major bookstores, so if you want to find it, I can heartily recommend it. I've got a copy here, but it's blurring [INAUDIBLE] so I won't show that.
So now that we've reached the end of the event, I'd be really grateful if those joining the audience could please fill in a short survey. The link will be shared in the chat and also be available in the event's description. As a thank you for filling out the survey, you'll be entered into a prize draw for a 50-pound bookshop.org voucher.
And if you haven't already, please sign up to Nesta's newsletter, and we'll let you know about other upcoming events. All that remains is for me to say thank you so much to James for an absolutely fascinating discussion. To be honest, I wish I could talk to you all afternoon, but I suspect you don't have the time, and I don't know if our audience does either. Thank you very much indeed, and thank you, everyone else, for joining us.
James Bridle: Thank you very much. Cheers.
The opinions expressed in this event recording are those of the speaker. For more information, view our full statement on external contributors.
He/Him
Laurie leads on strategic foresight for Nesta. He oversees much of the organisation's research into emerging trends, novel technologies and promising interventions. Prior to joining Nesta he worked at the Royal Society, the UK's national academy of science, where he most recently led on emerging technologies and futures. Previously he worked at the Academy of Medical Sciences on policy around medical science, public health and international health.