What is artificial intelligence?
The meaning of the term ‘artificial intelligence’ has evolved many times since it was first coined by John McCarthy in the 1950s. More recently, the following definition by Russell and Norvig has gained traction in the research community for its ability to capture the variety of AI methods and problem domains: intelligent agents that receive percepts from the environment and take actions that affect that environment.
This definition covers everything from the back-end algorithms that power Google’s search engine and Netflix’s recommender system to AI-powered hardware systems like robots and autonomous vehicles. The actions or tasks that these AI agents perform are those typically considered to require human (or ‘natural’) intelligence. Common everyday uses of AI include the perception of audio and visual cues by personal assistants like Alexa, automated translation between languages by Google Translate and route optimisation in cities by apps such as Citymapper and Waze.
Over the last 70 years, AI research has been defined by several competing schools of thought, which differ in their assumptions about what ingredients are needed to create an intelligent machine. The two dominant paradigms are Symbolic and Statistical (see Figure 1). Symbolic methods were very popular in the early days of AI and assumed that AI could be programmed by predefining a set of rules for a computational system to follow.
However, since the 1990s and 2000s, when hardware advances began to amplify machine capabilities by increasing data storage and the speed at which algorithms could carry out computations, statistical methods have come to dominate the field. These methods extract relevant rules and knowledge from many examples drawn from a specific problem domain. A final class of methods – known as Embodied Intelligence – sits outside both of these paradigms. These methods assume that higher intelligence requires a body or the ability to act in the external world, for example, robotics. While Figure 1 is not exhaustive, it illustrates the main methods currently being used at the intersection of AI and collective intelligence (CI).
Figure 1: The main AI methods currently used at the intersection of AI and CI
Machines that learn from data
Most applications that use AI outside of research labs today are based on machine learning: a broad category of algorithmic methods that improve their performance at a task based on experience and data. Machine-learning algorithms are statistical, which means they typically rely on extracting patterns from very large training datasets in order to reach the necessary quality of output before they can be deployed in real-world contexts.
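To make the idea of ‘improving with experience’ concrete, the sketch below trains a simple classifier on progressively larger slices of a dataset and measures its accuracy on held-out examples. It assumes scikit-learn is installed; the dataset, model and slice sizes are illustrative choices, not something prescribed by the original text.

    # A minimal sketch of learning from data, assuming scikit-learn is
    # installed; the dataset and model are illustrative choices only.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)  # small built-in image dataset
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    # Train on progressively larger slices of the training data: accuracy
    # on the held-out test set typically rises as more examples are seen.
    for n in (50, 200, 800, len(X_train)):
        model = LogisticRegression(max_iter=1000)
        model.fit(X_train[:n], y_train[:n])
        accuracy = accuracy_score(y_test, model.predict(X_test))
        print(f"{n:5d} training examples -> test accuracy {accuracy:.2f}")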
Machine-learning techniques optimise their performance according to different learning paradigms. Many CI projects that use AI rely on supervised learning, where an algorithm learns to make predictions from many examples of data that have already been labelled by people. For example, some citizen science projects ask participants to label images of animals or galaxies, or to transcribe scanned documents. This information can then be used as a training dataset from which machines learn to classify similar images that are unlabelled.
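As a rough illustration of this workflow – assuming scikit-learn, with synthetic stand-ins for real image features and volunteer-assigned labels – a classifier can be trained on labelled examples and then asked to label unseen ones:

    # A hedged sketch of supervised learning with scikit-learn. The
    # features and labels below are synthetic placeholders standing in
    # for real images and labels gathered from citizen-science volunteers.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    image_features = rng.normal(size=(1000, 32))  # one feature vector per image

    # Placeholder rule standing in for labels provided by participants.
    volunteer_labels = np.where(image_features[:, 0] > 0, "zebra", "lion")

    X_train, X_test, y_train, y_test = train_test_split(
        image_features, volunteer_labels, random_state=0
    )

    # The classifier learns the mapping from features to human labels,
    # then predicts labels for images it has never seen.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))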
Another common learning paradigm is unsupervised learning, which is used for large, complex datasets that have no labels. Unsupervised learning identifies common features of the data in order to make it easier to understand. For example, when citizens are invited to submit ideas for policy interventions or public funds, the resulting dataset can be difficult for officials to process because it contains thousands of ideas, all written in different styles. In this case, unsupervised techniques may be used to simplify the data by grouping ideas into broader categories, such as commonly occurring themes.
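A minimal sketch of this kind of grouping, again assuming scikit-learn and using a handful of invented citizen submissions, might cluster free-text ideas into themes using TF-IDF features and k-means; the number of clusters is an illustrative choice:

    # A hedged sketch of unsupervised learning: grouping free-text ideas
    # into broad themes with TF-IDF features and k-means clustering.
    # The ideas below are hypothetical examples of citizen submissions.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    ideas = [
        "build more cycle lanes on main roads",
        "protected bike paths to the city centre",
        "plant trees in every neighbourhood park",
        "more green space and community gardens",
        "free bus travel for under-18s",
        "cheaper off-peak public transport fares",
    ]

    # Represent each idea as a TF-IDF vector, then cluster into 3 themes.
    vectors = TfidfVectorizer().fit_transform(ideas)
    themes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

    for theme, idea in sorted(zip(themes, ideas)):
        print(theme, "-", idea)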
Machine learning is a particularly good fit for CI projects, many of which gather or interpret large datasets. The data is often human-generated content like images and videos, either actively crowdsourced through smartphone apps and online platforms or scraped from social media and other public channels, such as the radio, where members of the public are passive contributors.