The work of the Centre for Collective Intelligence Design (CCID) at Nesta rests on the premise that collective human intelligence combined with machine intelligence is more powerful than either in isolation. In September 2020, the Centre launched the third round of the Collective Intelligence Grants programme, providing grants of up to £30,000 for experiments that explore and test this idea in new ways to help create more equitable and sustainable futures.
Between April 2021 and February 2022, three teams conducted three experiments.
Conducted by DigVentures in partnership with ArchAI and Brightwater Landscape Partnership.
What was the experiment about?
This experiment explored a new collective intelligence approach to identifying previously unknown archaeological and historic features. It tested whether crowd-based labelling of Geographic Information System (GIS) data and remote sensing (LiDAR) data could be used to train an AI to identify sites of archaeological interest. The experiment took place in the Brightwater Landscape in County Durham.
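To illustrate the kind of pipeline this implies, the sketch below trains a supervised classifier on crowd-labelled LiDAR tiles. It uses synthetic data: the per-tile features, the labels and the choice of a random forest are illustrative assumptions, not the project's actual model or data.

```python
# Minimal sketch: crowd-labelled LiDAR tiles feeding a supervised classifier.
# Assumes each tile has been reduced to a small feature vector (e.g. elevation
# statistics) and carries a crowd-assigned label: 1 = possible archaeological
# feature, 0 = nothing of interest. All values here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_tiles = 1000
X = rng.normal(size=(n_tiles, 4))        # hypothetical per-tile LiDAR features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_tiles) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)              # train on the crowd-labelled tiles
print(classification_report(y_test, model.predict(X_test)))
```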
What did they learn?
The crowd of 100 participants found a staggering 3,670 archaeological features, a 60% uplift compared to the original archaeological data records. In addition, the crowd-labelled data increased the accuracy of the original data set from 88% to 94%. However, using the crowd data to train an AI proved more challenging, and for most types of archaeological site it didn’t work. Importantly, the project increased participants’ connection to the landscape – 75% of participants said taking part had increased their sense of place and connection to the Brightwater area.
Conducted by the University of Manchester in partnership with EMission and Wythenshawe Community Housing Group: Real Food Wythenshawe Project.
What was the experiment about?
This experiment tested whether a natural language processing (NLP) model trained by a crowd can be used to calculate the carbon footprint (or “carbon forkprint”) of online recipes and go on to suggest lower-carbon ingredient substitutions. The experiment also tested whether presenting people with lower-carbon substitutes leads to lower-carbon cooking.
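As a rough illustration of what a “forkprint” calculation and ingredient swap could look like, the sketch below hard-codes a tiny table of emission factors and substitutions. The real project estimated these from recipe text with a crowd-trained NLP model, so every ingredient name, factor and swap here is a placeholder.

```python
# Illustrative emission factors in kg CO2e per kg of ingredient (placeholder
# values), plus a hypothetical table of lower-carbon substitutions.
EMISSION_FACTORS = {"beef": 27.0, "lentils": 0.9, "chicken": 6.9, "rice": 4.0}
LOWER_CARBON_SWAPS = {"beef": "lentils", "chicken": "lentils"}

def forkprint(ingredients):
    """Sum the estimated kg CO2e for a recipe given (ingredient, kg) pairs."""
    return sum(EMISSION_FACTORS.get(name, 0.0) * kg for name, kg in ingredients)

def suggest_swaps(ingredients):
    """Return lower-carbon substitutions and the kg CO2e saved by each."""
    suggestions = []
    for name, kg in ingredients:
        swap = LOWER_CARBON_SWAPS.get(name)
        if swap:
            saved = (EMISSION_FACTORS[name] - EMISSION_FACTORS[swap]) * kg
            suggestions.append((name, swap, round(saved, 2)))
    return suggestions

recipe = [("beef", 0.5), ("rice", 0.3)]
print(forkprint(recipe))       # 14.7 kg CO2e for this made-up recipe
print(suggest_swaps(recipe))   # [('beef', 'lentils', 13.05)]
```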
What did they learn?
The team successfully built a machine learning algorithm to automate the “forkprint” estimation of online recipes by analysing 587 recipes submitted by 130 participants. They then built a bespoke browser extension to display the forkprint and generate low-carbon ingredient swaps for any online recipe. When the browser extension was tested with the cohort, participants saved an average of 3.9 kilograms of CO2 each.
What was the experiment about?
This experiment sought to understand how chart design influences perceptions of trustworthiness and readability, and whether a machine learning algorithm can be trained to recognise the factors that make a chart readable or trustworthy. The experiment also explored whether having the charts “verified” by AI technology or by other humans would make a difference to trust and confidence in popular chart types. The experiment involved 12,179 crowdworkers across three crowdsourcing platforms.
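To make the study setup concrete, the sketch below simulates the kind of data such an experiment produces, with crowdworkers rating chart variants defined by a handful of design factors, and fits a simple regression to estimate each factor's effect. The factor names, the rating scale and the model choice are assumptions for illustration only.

```python
# Synthetic stand-in for the study data: each row is one crowdworker rating one
# chart variant, described by a few design factors; a linear regression then
# estimates how much each factor shifts the trust rating. All values are made up.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 2000
design = pd.DataFrame({
    "chart_type": rng.choice(["bar", "line", "pie"], n),
    "has_source": rng.integers(0, 2, n),       # source citation shown?
    "has_error_bars": rng.integers(0, 2, n),   # error bars shown?
})
# Simulated trust ratings on an arbitrary scale, with small boosts for
# citations and error bars plus noise.
trust = (4
         + 0.6 * design["has_source"]
         + 0.4 * design["has_error_bars"]
         + rng.normal(scale=1.0, size=n))

X = pd.get_dummies(design, columns=["chart_type"], drop_first=True)
model = LinearRegression().fit(X, trust)
print(dict(zip(X.columns, model.coef_.round(2))))  # estimated effect of each factor
```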
What did they learn?
Several of the factors – such as chart type and colour, source citations and error bars – had statistically significant impacts on readability and trust. However, these results came with a considerable caveat: the findings were not consistent or replicable across the three crowdsourcing platforms. The experiment highlights significant challenges around the replicability of tasks across crowdsourcing platforms, particularly as these are increasingly being used to develop AI.
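The replicability problem can be illustrated with a simple per-platform re-test of one factor. The sketch below uses synthetic ratings and placeholder platform names: the same underlying effect can reach statistical significance on one platform and miss it on another once sample sizes differ.

```python
# Per-platform re-test of one design factor (error bars shown vs. not shown),
# using synthetic trust ratings. The effect is the same everywhere, but smaller
# samples make it harder to detect, which is one way replication can fail.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
for platform, n in [("platform_A", 400), ("platform_B", 400), ("platform_C", 120)]:
    with_bars = rng.normal(loc=4.3, scale=1.2, size=n)     # ratings, error bars shown
    without_bars = rng.normal(loc=4.0, scale=1.2, size=n)  # ratings, no error bars
    stat, p = ttest_ind(with_bars, without_bars)
    print(f"{platform}: p = {p:.3f} ({'significant' if p < 0.05 else 'not significant'} at 0.05)")
```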