HAL's revenge: the computer as 21st-century reviewer

In the 21st century we will increasingly see computers assessing art.

Art is, of course, about emotions, society, and the things it’s impolite to talk to strangers about (sex, politics and religion). Ostensible meanings aside, very sophisticated interpretations are possible, and a massive literature of criticism is devoted to them. Art’s shapeshifting, eternally changing nature (installation art, post-modernism, post-internet, who knows what next) makes it a challenging moving target, even for humans. Nevertheless, there are reasons to think that computers will be increasingly influential in assessing it.

Firstly, any protest that computational assessments of a work’s quality are inconceivable, or could never affect our behaviour, ignores overwhelming evidence to the contrary. This is, in a general sense, happening every day when we search the internet, and it influences our behaviour far more than any single critic past or present. Part of what is known to underlie internet search is the PageRank algorithm developed by Google founders Sergey Brin and Larry Page, which decides which pages are ‘best’ to return based on the mathematical structure of the network of hyperlinks between webpages.[1] In addition, things are routinely recommended to us on sites like Amazon and Netflix on the basis that there is statistical evidence we might like them. These algorithms are, admittedly, building on valuations implicitly captured from human behaviour, not (necessarily) the considered aesthetic views of experts. We should not forget, though, that they are just that: algorithms. But what of human experts?
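
To give a flavour of the statistical logic behind such recommendations, here is a minimal, hedged sketch of item-to-item similarity computed from user behaviour alone. The ratings and item names are invented, and real systems at Amazon or Netflix are vastly more sophisticated.

```python
# A minimal sketch of "people who liked X also liked Y" logic.
# The data below is made up for illustration.

from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Rows are users, columns are items (1 = liked, 0 = no signal).
ratings = {
    "alice": [1, 1, 0],
    "bob":   [1, 1, 1],
    "carol": [0, 1, 1],
}
items = ["film_a", "film_b", "film_c"]

# Item-to-item similarity: compare each item's column of user ratings.
columns = list(zip(*ratings.values()))
for i, a in enumerate(items):
    for j, b in enumerate(items):
        if i < j:
            print(a, b, round(cosine(columns[i], columns[j]), 2))
```

Items whose columns point in similar directions were liked by similar sets of people, which is the (purely behavioural) basis for recommending one alongside the other.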

Human experts can make mistakes. In 1995, for example, a London auction house failed to recognise a work it was bringing to auction as being by the 17th-century artist Poussin and attributed it to a lesser-known artist, resulting in an undervaluation of several million pounds.[2] Questions of attribution aside, there are also artists whose work is now highly regarded but was not really recognised at the time; Van Gogh sold very few paintings in his lifetime. Conversely, there are those, lauded in their time, who subsequently slip, or are pushed, from their pedestal. George Watts was widely seen in the second half of the 19th century as a great British painter. Today the standard response to his name is more likely to be not ‘Watts’, but ‘who?’.[3] None of this is to claim that these things exclusively reflect contemporary critical failings, or that, had it been possible, computational data analysis would have changed any of this. It should, though, give us pause for thought about why what is later considered great art may be overlooked, or praised and then subsequently drop out of sight.

One of the reasons is that we have a tendency to be affected by each other’s views. This is not necessarily irrational: maybe someone else liking something is an indication of quality. The communal experience is important, and there is also safety in numbers: can you be seen as having good taste if you like something that no one else does? This can, though, distort outcomes. Research by Matthew Salganik, Peter Dodds and Duncan Watts has examined the effects of social influence on people’s choices about creative content.[4] They created an artificial music market in which over 14,000 people listened to songs by unknown bands. Participants had access to song and band names and were able to download songs. They were randomly divided into those who saw information on how many people had downloaded a track and those who did not (so the latter’s choices could not have been affected by other people’s). Where people saw the download information there was much more inequality of outcome, in that the most successful songs accounted for a higher proportion of downloads. Which songs did well and which did badly was also harder to predict where people saw the download information.[5] In other words, our social nature muddies our choices about artistic quality and affects who is successful in unpredictable ways.

So computational assessment is, in a way, already happening, and human judgment has its limitations; there is also increasing evidence that it is possible to use computer algorithms to meaningfully assess art. It is widely acknowledged that originality is important in art, as is being influential. Assessing these is not completely subjective: it requires knowledge of the other work that has been done and the technical ability to measure similarity between works, something computers can be well placed to do given the vast amount of information on past works they can store and analyse. Using digitised images of over 62,000 artworks from across 600 years, recent research by Ahmed Elgammal and Babak Saleh statistically analysed how much works differed from earlier work (their originality) and resembled later work (their influence).[6] Based on this they calculated a measure of what they term the work’s ‘creativity’, using a variant of the PageRank algorithm.[7] Without prompting, the analysis was able to identify what are widely regarded as particularly important works of art: Munch’s The Scream and Picasso’s Les Demoiselles d’Avignon, for example, had high creativity scores. It also revealed human error: a painting it flagged as particularly creative was actually found to have been incorrectly dated as older than it was. This assessment has more the character of art history than of a contemporary review.[8] However, as it becomes ever easier to digitise and scan works of art, and to undertake ever more sophisticated data analysis, it seems likely that we will learn more and more about recognising originality and influence, and about aesthetics in general.

This is not to suggest that this kind of analysis will supplant human critics and reviewers any time soon, or that statistical analysis is faultless. The research described here looks at specific technical questions in particular contexts, and the computer analysis is not completely autonomous: it operates within a framework set up by the researchers. Part of critics’ importance arises from how they communicate their judgment and vision. It is not for nothing that Alan Turing proposed the ability of a computer to hold a conversation as a test of intelligence (the Turing Test), and despite progress computers can’t yet communicate that well. Nevertheless, we should expect them to be increasingly important in assessing art, not because they are human, but because they aren’t.


[1] Broadly speaking, PageRank estimates the relative importance of a webpage from the link structure of the web: a page matters if many pages link to it, and it matters more if those linking pages are themselves important, so importance is passed along links recursively. A toy sketch follows the citation below.

Brin, S. and Page, L. (1998), ‘The anatomy of a large-scale hypertextual Web search engine’, Computer Networks and ISDN Systems.
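
As a hedged illustration, here is a minimal power-iteration sketch of PageRank on an invented three-page web; it captures the recursive idea above, not Google’s production system.

```python
# A minimal PageRank sketch using power iteration on a toy link graph.
# The graph is invented; real web-scale implementations are far more
# elaborate, but the core recursive idea is the same.

DAMPING = 0.85  # standard damping factor from Brin and Page (1998)

def pagerank(links, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal importance
    for _ in range(iterations):
        new_rank = {p: (1.0 - DAMPING) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += DAMPING * rank[page] / n
            else:  # a page passes its importance on to the pages it links to
                share = DAMPING * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web: A and C each receive links, B receives none and ranks lowest.
toy_web = {"A": ["C"], "B": ["A", "C"], "C": ["A"]}
print(pagerank(toy_web))
```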

[2] Evening Standard (2002), ‘Sotheby’s put £15,000 price on £4m picture’.

http://www.standard.co.uk/news/sothebys-put-15000-price-on-4m-picture-6330767.html

[3] If you want to make up your own mind about Watts, there are collections of his work in the Tate, the National Portrait Gallery and the Watts Gallery near Guildford.

[4] Salganik, M., Dodds, P. and Watts, D. (2006), ‘Experimental study of inequality and unpredictability in an artificial cultural market’, Science.

[5] This was assessed by splitting the people who could see download information into independent subgroups (people within a subgroup only saw the download information generated by that subgroup) and seeing how well the same songs did across the subgroups. Some songs did consistently well (or badly) across all subgroups, i.e. their success (or failure) was predictable, but others did well in some subgroups and not in others, i.e. were unpredictable. In comparison with the group of people who made choices autonomously, which songs did well (or badly) was harder to predict where there was social influence. (The independent-choice group was randomly sampled into equivalent subgroups to create a benchmark for this comparison.)
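
One simple way to quantify that kind of cross-subgroup unpredictability, sketched here with invented market-share numbers rather than the study’s data, is the average gap in a song’s success between pairs of ‘worlds’:

```python
# A hedged sketch of an unpredictability measure of the kind used in
# the Salganik et al. study; the numbers below are made up.

from itertools import combinations

def unpredictability(shares_by_world):
    """shares_by_world: one dict per world, mapping song -> market share.
    Returns the mean absolute difference in a song's share across all
    pairs of worlds, averaged over songs."""
    songs = shares_by_world[0].keys()
    total, count = 0.0, 0
    for song in songs:
        for w1, w2 in combinations(shares_by_world, 2):
            total += abs(w1[song] - w2[song])
            count += 1
    return total / count

# Hypothetical example: three social-influence worlds, two songs.
worlds = [
    {"song_a": 0.70, "song_b": 0.30},
    {"song_a": 0.20, "song_b": 0.80},
    {"song_a": 0.55, "song_b": 0.45},
]
print(unpredictability(worlds))  # higher value = less predictable outcomes
```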

[6] Elgammal, A. and Saleh, B. (2015), ‘Quantifying creativity in art networks’, International Conference on Computational Creativity.

[7] The dates of, and statistical similarity between, artworks define a network with weighted edges between the artworks, which is then analysed to work out which are the most creative works.
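
As a toy sketch of the underlying intuition only (this is not Elgammal and Saleh’s actual algorithm, and the works, dates and similarities are invented): a work scores highly if it resembles later work more than it resembles earlier work.

```python
# Toy creativity scoring over a dated, similarity-weighted network.
# In the real research, similarities come from image analysis and the
# scores from a PageRank-style propagation; this is a crude stand-in.

def creativity_scores(artworks, similarity):
    """artworks: dict name -> year; similarity: dict (a, b) -> [0, 1]."""
    scores = {}
    for a, year_a in artworks.items():
        # Similarity to later works counts for the score (influence)...
        influence = sum(s for (x, y), s in similarity.items()
                        if x == a and artworks[y] > year_a)
        # ...similarity to earlier works counts against it (derivativeness).
        derivative = sum(s for (x, y), s in similarity.items()
                         if y == a and artworks[x] < year_a)
        scores[a] = influence - derivative  # original AND influential = high
    return scores

# Hypothetical miniature "art history" of three works.
works = {"precursor": 1880, "breakthrough": 1907, "follower": 1920}
sims = {
    ("precursor", "breakthrough"): 0.1,  # breakthrough owes little to the past...
    ("breakthrough", "follower"): 0.9,   # ...but is strongly echoed later
}
print(creativity_scores(works, sims))  # "breakthrough" scores highest
```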

[8] It will also be picking up the social influence of artists on each other. It seems likely that social influence on critical assessment is implicitly reflected in which artworks were preserved (and so could be analysed), or in mediating influence between artists.

Author

John Davies

Principal Data Scientist, Data Analytics Practice

John was a data scientist focusing on the digital and creative economy. He was interested in the interface of economics, digital technology and data.
