The last few years have seen massive hype and massive paranoia over Artificial Intelligence. While AI continues to progress rapidly, the hype is beginning to subside. There’s greater realism about the speed of advance in fields like driverless cars, and about just how long human drivers are likely to be needed alongside algorithms. The fears are also being moderated, as it’s recognised that AI will more often reshape jobs than replace them entirely.
This more nuanced position is also opening up important new thinking on the relationship between AI and Collective Intelligence: specifically, interest is turning to how AI can help large groups to think together, rather than providing an alternative to them. Over the last four months, we’ve started mapping out this new field - what AI-enabled Collective Intelligence (CI) looks like today, where the main opportunities for innovation lie, which challenges to be aware of, and how these could be managed. In this blog, which provides the background to a panel discussion on the AI/CI relationship at our 2019 Collective Intelligence Conference, we explore some early lessons from our research.
There are very few fields where AI on its own is likely to be able to solve complex problems. But AI can be useful in providing inputs - for example, making predictions about likely patterns of climate change. And AI can help large groups to deliberate, think and decide more effectively, increasing both the depth and breadth of human collective intelligence.
This emerging field - which has had much less attention and funding than the use of AI in prediction and analysis - is our main focus here. As we show, AI can help groups think more sharply - whether charities, companies, parliaments or cities - and it could help solve the problems of scale that often impede efforts at large-scale deliberation and democracy.
This piece summarises some of the most promising examples and where they could be heading, from humanitarian aid to science, and points to the future potential for other areas of CI, such as digital democracy. It shows that the field is now moving beyond the limitations of a previous generation of work on human-computer interfaces and design, which focused primarily on how individuals (e.g. doctors, lawyers or teachers) could interact with AI, to focus instead on groups.
While presenting the opportunities for AI to augment and amplify collective intelligence - particularly in handling scale - it also sets out early lessons on some of the design tensions that anyone experimenting with combinations of AI and CI needs to be aware of.
AI today is much more dependent on people and crowdsourcing than is often appreciated. A huge amount of hidden collective labour contributes to the latest advances in AI, and specifically machine learning: supervised approaches to machine learning rely on large labelled datasets as their training material. Whether these activities are compensated through crowd-work platforms such as Amazon Mechanical Turk and Figure Eight, or harvested from internet traffic with tools such as reCAPTCHA, they often remain the untold story behind the success of AI. Flipping this on its head, many crowd-labelling collective intelligence initiatives have naturally found themselves in possession of vast datasets that make suitable training material for AI tools. But this is just one of the many possible interactions between AI and the crowd.
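To make this concrete, here is a minimal sketch of how crowd labels typically become training material: noisy votes from several workers are resolved by majority vote, and the resolved labels then train a supervised classifier. The snippets, labels and vote counts below are invented for illustration and don’t come from any real platform.

```python
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Illustrative microtask output: each text snippet labelled by three workers.
tasks = [
    ("how do I treat leaf rust on maize?", ["question", "question", "statement"]),
    ("my maize crop failed last season", ["statement", "statement", "statement"]),
    ("which fertiliser suits sandy soil?", ["question", "question", "question"]),
    ("rain has been heavy this month", ["statement", "question", "statement"]),
]

texts = [text for text, _ in tasks]
# Resolve worker disagreement with a simple majority vote.
labels = [Counter(votes).most_common(1)[0][0] for _, votes in tasks]

# The aggregated crowd labels now serve as supervised training targets.
vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Classify a new, unseen contribution with the crowd-trained model.
print(model.predict(vectoriser.transform(["is this seed variety drought resistant?"])))
```

Real platforms use more sophisticated aggregation than majority vote (weighting workers by estimated reliability, for instance), but the shape of the pipeline is the same.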
In Kenya and Uganda, more than one million farmers are members of the WeFarm platform, which allows small-scale farmers to ask each other questions on anything related to agriculture and receive crowdsourced, bespoke content and ideas from other farmers around the world within minutes. To deal with more than 40,000 daily questions and answers, the network uses machine-learning-based natural language processing to match each question to the best-placed responders within the network.
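A hedged sketch of the kind of routing this describes: score an incoming question against responders’ past answers and send it to the closest match. The farmer profiles and question below are invented, and WeFarm’s actual system will be considerably more sophisticated (not least in handling multiple languages and SMS).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative expertise profiles: the text of each farmer's past answers.
responders = {
    "farmer_a": "treating maize leaf rust and other fungal crop disease",
    "farmer_b": "goat and cattle husbandry, vaccination schedules",
    "farmer_c": "drip irrigation and water management for dry seasons",
}

question = "what spray works against rust on my maize leaves?"

vectoriser = TfidfVectorizer()
profile_matrix = vectoriser.fit_transform(responders.values())
scores = cosine_similarity(vectoriser.transform([question]), profile_matrix)[0]

# Route the question to the best-matching responder.
ranked = sorted(zip(responders, scores), key=lambda pair: pair[1], reverse=True)
print(ranked[0])  # ('farmer_a', ...) - the crop-disease specialist
```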
WeFarm and the other examples in this feature are good illustrations of how AI can enable CI.
At its simplest, collective intelligence can be understood as the enhanced capacity that is created when people work together, often with the help of technology, to mobilise a wider range of information, ideas and insights. Within this, CI-based approaches provide new opportunities in four ways, by increasing our ability to: understand problems; seek solutions; decide and act; and learn and adapt.
Our playbook for collective intelligence design explores the wider opportunities in these four categories in more detail.
A common challenge for tech-enabled CI solutions is scale: making sense of the many different people, ideas, contributions and types of data within a network. This is especially true because many solutions rely on data contributed from multiple sources - for example, combining satellite, weather-station and citizen-generated data to better understand changes to the environment. While organising these data is primarily a software engineering challenge, once they are cleaned and structured they provide a rich input for current “data-hungry” AI methods such as natural language processing, computer vision, and speech and audio processing.
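As a toy illustration of that organising step (with invented column names and values), the different sources might be aligned on a shared time index so that downstream models see a single tidy table:

```python
import pandas as pd

# Invented readings from three sources covering the same two days.
satellite = pd.DataFrame({"date": pd.to_datetime(["2019-06-01", "2019-06-02"]),
                          "vegetation_index": [0.61, 0.58]})
weather = pd.DataFrame({"date": pd.to_datetime(["2019-06-01", "2019-06-02"]),
                        "rain_mm": [4.0, 0.0]})
citizen = pd.DataFrame({"date": pd.to_datetime(["2019-06-01", "2019-06-02"]),
                        "flood_reports": [2, 0]})

# Align all three sources on a shared date index: one row per day,
# one column per signal, ready for modelling.
merged = satellite.merge(weather, on="date").merge(citizen, on="date")
print(merged)
```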
Recent advances in these methods could play a significant role in enhancing CI by increasing the efficiency and scale of data processing, making more accurate predictions about future events, or identifying new patterns and relationships between datasets. This combination of CI and AI can enable more timely reactions and decision-making, as well as a more nuanced understanding of the complex dynamics of situations and how they change in real time. The table below summarises some of our early thoughts on the mapping between CI challenges and how AI might be used to overcome them.
[Table: mapping between CI challenges and the AI methods that might be used to overcome them; available to download as a PDF.]
At present, there is no established framework for understanding the interaction between AI and CI. Our efforts to map existing practice and academic research have indicated at least five ways we can begin to understand this relationship, which we describe below (we take a lot of inspiration from this great resource by Rai et al on digital labour platforms). Although these categories of interaction will undoubtedly grow as the field evolves, we hope that they can act as a useful starting point for those interested in exploring the current AI-enabled CI landscape and future opportunities.
In this form of interaction, people knowingly interact with each other and with the AI, but take turns to solve a task. This is done either by combining the different capabilities of human and machine intelligence, or through a system of feedback loops between the crowd and the AI that allows continuous improvement of the system.
One example of this is the Early Warning Project, which uses both crowd forecasting and statistical modelling to generate predictions about the risk of mass atrocities worldwide. In combination, the methods offer complementary insights and counterbalance each other’s weaknesses.
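A minimal sketch of that hybrid logic, assuming a simple linear opinion pool (the Early Warning Project’s actual aggregation is more involved): individual crowd forecasts are aggregated, then blended with the statistical model’s estimate.

```python
import statistics

def pooled_risk(crowd_forecasts, model_prob, crowd_weight=0.5):
    """Aggregate individual crowd forecasts by their median,
    then blend the result with the statistical model's probability."""
    crowd_prob = statistics.median(crowd_forecasts)
    return crowd_weight * crowd_prob + (1 - crowd_weight) * model_prob

# The crowd sees recent political context; the model sees historical base
# rates - pooling lets each counterbalance the other's blind spots.
print(pooled_risk([0.25, 0.30, 0.40], model_prob=0.12))  # ~0.21
```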
In this second type of interaction, networks of humans and sensors passively generate or actively collect data that is used as the input for a machine learning algorithm. Insights and lessons from this analysis are then used by the wider community of the platform’s users to generate new knowledge.
This interaction also sometimes makes use of crowd microtasking to generate labelled datasets as training input for supervised ML models, or uses unsupervised AI methods to produce structured, organised insights.
Examples of this category of AI-CI interaction range from large-scale projects with active participation from users, such as various projects on Zooniverse and MapwithAI, to projects like OneSoil, which makes sense of passively gathered sensor data.
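As a toy example of the unsupervised route (with synthetic readings and invented field names), clustering can turn a stream of raw sensor data into a small number of conditions that participants can then interpret:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic (soil_moisture, temperature_c) readings from field sensors.
readings = np.array([
    [0.10, 31.0], [0.12, 30.5], [0.11, 32.0],   # dry, hot plots
    [0.45, 22.0], [0.48, 21.5], [0.44, 23.0],   # wet, cool plots
])

# Group the raw readings into two field conditions for humans to interpret.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(readings)
print(clusters)  # e.g. [0 0 0 1 1 1]: two distinct conditions to investigate
```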
Instead of taking turns, in this form of collaboration AI and humans contribute to the same task simultaneously, in real time. The generative design software for collaborative design developed by Autodesk is one example. Here, the AI gives designers and other users real-time suggestions for different possible permutations of a solution and design alternatives, based on the parameters it is given. If the designer changes a parameter - for example, the width of a room - the AI generates a new set of alternative design options.
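A toy version of that interaction loop, with made-up dimensions and constraints (real generative design tools use far richer solvers and objectives):

```python
import random

DESK_WIDTH_M = 1.2  # assumed width of one desk, for illustration only

def generate_options(room_width: float, n_attempts: int = 5):
    """Propose candidate desk layouts that fit within the room width."""
    options = []
    for _ in range(n_attempts):
        desks = random.randint(2, 6)
        aisle = round(random.uniform(0.8, 1.5), 2)
        # Keep only layouts whose desks plus aisle fit the room.
        if desks * DESK_WIDTH_M + aisle <= room_width:
            options.append({"desks": desks, "aisle_m": aisle})
    return options

random.seed(0)
print(generate_options(room_width=6.0))
# The designer widens the room; the tool immediately regenerates options
# in the new, larger solution space:
print(generate_options(room_width=9.0))
```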
AI can also play a vital role in enabling more efficient and streamlined collective intelligence projects by helping people navigate large volumes of different kinds of information and tasks.
In this type of interaction, AI provides back-end functionality to improve the experience of individuals on online platforms. This can be achieved in many different ways: for example, by better matching people who have common interests, optimising search functions (SyrianArchive), or optimising training processes and task assignment for human contributors to citizen science projects (GravitySpy on Zooniverse). We see this type of AI contribution as “greasing the wheels” of a CI process.
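A hedged sketch of one such back-end function, task assignment, with invented skill and difficulty scores (this is not GravitySpy’s actual algorithm): route each task to the volunteer whose estimated skill best fits its difficulty, so that novices are not discouraged and experts are not bored.

```python
def assign_tasks(volunteers: dict, tasks: dict) -> dict:
    """Greedily pair each task with the closest-skilled free volunteer."""
    free = dict(volunteers)
    assignment = {}
    # Hardest tasks first, so they get first pick of the skill pool.
    for task, difficulty in sorted(tasks.items(), key=lambda t: -t[1]):
        best = min(free, key=lambda v: abs(free[v] - difficulty))
        assignment[task] = best
        free.pop(best)
    return assignment

volunteers = {"ada": 0.9, "ben": 0.5, "cas": 0.2}   # estimated skill (0-1)
tasks = {"glitch_hard": 0.85, "glitch_mid": 0.5, "glitch_easy": 0.1}
print(assign_tasks(volunteers, tasks))
# {'glitch_hard': 'ada', 'glitch_mid': 'ben', 'glitch_easy': 'cas'}
```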
Finally, CI initiatives can be used to support the collaborative or competitive development of AI tools, using crowd contributions to ensure that these are better and fairer.
Examples of this include online challenges that focus on AI development, such as the Deepfake Detection Challenge, and the Malmo Collaborative AI Challenge, which is specifically set up as a game to reward the development of more collaborative AI. Another example is augmenting AI by providing more diverse training data, which can in turn lead to AI tools that are more aligned with the public interest. Mozilla’s Common Voice project, for example, is creating an entirely new dataset driven by crowdsourced voice contribution and validation, to create an audio ML model that is both more transparent and more representative of the population.
While there are many opportunities, the combination of AI and CI also brings with it a number of design tensions. We offer a preview of some of the main trade-offs below, with particular attention to the impact that AI could have on motivation to participate, crowd dynamics and responsibilities in high-stakes contexts.
Several examples of AI integration in CI projects involve the AI performing tasks that were previously carried out by volunteers. The risk of disincentivising volunteers - by making the remaining tasks too hard or too monotonous - has been highlighted in particular by citizen science projects such as the Zooniverse platform and Cochrane Crowd. Apart from the risk of damaging relationships with volunteers who have dedicated large amounts of time (sometimes years) to these projects, there is also the potential loss of auxiliary social impact, such as science education in the case of citizen science, through increased automation of microtasks.
Algorithms typically optimise for accuracy and/or speed. In the context of CI, prioritising these characteristics may not always be appropriate. For example, when citizens are brought together to discuss contentious or complex issues, as in digital or deliberative democracy initiatives, you may choose to optimise for transparency and inclusiveness, which may actually slow down the process.
In AI and software development, in both industry and academia, an obsession with optimising tools can distract from other drivers that should carry equal weight as optimisation criteria. For example, iterative, incremental improvements to the accuracy of an algorithm are often among the primary outcome measures that drive investment and the allocation of resources. Within CI projects, however, the AI is typically used alongside significant human contributions, such as additional verification by experts and the crowd. For CI projects, the priority will therefore often be a working AI tool with “high enough” accuracy to deliver a tangible improvement at scale, rather than investing extra resources in ongoing refinement of a tool for an extra 1-2 per cent of accuracy.
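A sketch of what such a “high enough” stopping rule could look like, with purely illustrative thresholds:

```python
ACCURACY_TARGET = 0.90      # level at which human verification can cope
MIN_GAIN_PER_ROUND = 0.005  # below this, refinement isn't worth the cost

def keep_refining(accuracy_history: list) -> bool:
    """Continue tuning only while short of the target and still improving."""
    current = accuracy_history[-1]
    gain = current - accuracy_history[-2] if len(accuracy_history) > 1 else float("inf")
    return current < ACCURACY_TARGET and gain >= MIN_GAIN_PER_ROUND

print(keep_refining([0.82, 0.88]))    # True: below target and improving
print(keep_refining([0.88, 0.91]))    # False: target reached, ship it
print(keep_refining([0.883, 0.884]))  # False: gains now too marginal
```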
Some of the most cutting-edge AI methods are currently developed and tested in closed lab settings, or in industry contexts where interpretability is less of a priority than methodological advances and where datasets can lack real-world messiness.
One of the most promising techniques at the AI research frontier, deep learning, is often criticised for resisting interrogation: it is hard to explain why a deep model produces the outputs it does. The scale of deployment typically seen in real-world CI initiatives increases the potential risk when things go wrong, as can happen when methods from the lab are brought into messy real-world contexts. This places a higher burden of responsibility on CI project leads to ensure that the AI tools they deploy are well understood. Our research has highlighted cases where deep learning methods were trialled and discarded in favour of interpretable, classical ML approaches in order to meet the stricter accountability norms that come with working with the public sector. This risk aversion is understandable, but it might also be preventing more imaginative exploration of AI-CI interaction.
Some of the most common failures of AI integration stem from forgetting to adequately consider ongoing human interactions and group behaviour when deploying AI tools. Examples of initiatives that didn’t work as intended include Google Flu Trends, which was hailed as a success of search-query scraping before it emerged that it was vulnerable to overfitting and to changes in search behaviour. Citizen science efforts that focus entirely on building an automated tool can fail to consider volunteer needs, such as adequate training on tasks, which in turn yields data of too poor quality to train AI. Even well-intentioned projects, such as the “Detox” tool developed by Google Jigsaw and Wikipedia, which used crowdsourcing in combination with machine learning to identify toxic comments, may only be effective for short periods until “bad actors” figure out how to counteract them. This vulnerability to gaming is a common feature of automated methods when they are not updated frequently enough to remain sensitive to a shifting context.
Finally, current “data-hungry” AI methods rely on large amounts of clean, machine-readable data. Even in public sector contexts where such data exists, deployment of AI encounters long delays when many stakeholders are involved in negotiating the necessary data sharing. For example, the New York City Fire Department, which has long promised an enhanced AI-enabled version (FireCast 3.0) of its model for predicting fire risks, has faced many difficulties due to organisational culture.
Between the £1 billion allocated in the 2018 industrial strategy and the more than £800 million raised by AI companies in the first half of 2019, AI remains one of the best-funded areas of technology research and development. In spite of this, however, little funding is going towards CI opportunities.
At Nesta, we have been exploring how this could be done. In April 2019 we announced our first 12 grants for collective intelligence experiments. Earlier this month, in partnership with our co-funders the Wellcome Trust, the Cloudera Foundation and Omidyar Network, we launched our second fund, making an additional £500,000 available to organisations interested in experimenting with AI/CI-based solutions.
Others have also begun developing a research agenda in this space, with growing interest from academia in exploring some of the more interesting and imaginative interactions between humans and machines. Examples include GovLab’s recent study on identifying citizens’ needs by combining AI and CI, the work of the MIT Centre for CI, and studies of the impact of using machine learning on the Zooniverse citizen science platform. However, amongst the billions spent on AI in the UK and internationally, studies like these are a rarity.
We have just started exploring this area and will be publishing more on this project in the coming months as we continue our research. If you’re interested in finding out more, would like to share comments or are interested in collaborating with us, we’d love to hear from you! Please comment below or contact Aleks Berditchevskaia.
A big thank you to Geoff Mulgan, Kostas Stathoulopoulos and Jack Orlick for their contributions, feedback and questions, all of which made this blog infinitely better.