The next big technological controversy will be about algorithms and machine learning, says Olivier Usher.
Algorithms share many features with previous technological innovations mired in controversy. Just as with the introduction of genetically modified crops, vaccines and nuclear power in previous decades, the spread of algorithmic decision making has broad social implications, compounded by a lack of transparency, accountability and choice.
In 2017, public disquiet about the decisions that algorithms make, the way they affect us, and the lack of debate around their introduction, will become mainstream.
An algorithm is a step-by-step sequence of rules that sets out how to make a decision about something. The concept has existed since the dawn of science (and the word itself is centuries old), but it is in the past half-century that algorithms have become so important: computer programs are nothing but complex algorithms, rules that tell computer hardware how to make decisions.
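To make the idea concrete, here is a minimal sketch of an algorithm as explicit, human-written rules: a toy mortgage-approval check in Python. The function name, thresholds and fields are invented for illustration, not taken from any real lender.

```python
def approve_mortgage(income: float, deposit: float, price: float) -> bool:
    """Toy rule set: approve if the loan is at most 4.5x income
    and the deposit covers at least 10% of the purchase price.
    (Hypothetical thresholds, for illustration only.)"""
    loan = price - deposit
    return loan <= 4.5 * income and deposit >= 0.10 * price

print(approve_mortgage(income=40_000, deposit=30_000, price=200_000))  # True
```

Every step is written down by a human, so anyone reading the code can see exactly why an application was accepted or refused.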
Machine learning is a more recent invention. Programmed to recognise patterns in data and told to promote desirable outcomes, machine learning algorithms effectively rewrite themselves: a form of artificial intelligence. Instead of simply implementing the rules a human programmer has laid down, the computer figures out the best way of achieving an outcome it is told to prioritise.
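By way of contrast, here is a minimal machine learning sketch, using the scikit-learn library and synthetic data invented for illustration. Rather than writing the approval rule ourselves, we let a model infer one from labelled past decisions.

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: (income, deposit, price); label: 1 = approved, 0 = declined.
# Synthetic examples standing in for a history of human decisions.
past_applications = [
    (40_000, 30_000, 200_000),
    (25_000, 5_000, 250_000),
    (60_000, 50_000, 300_000),
    (30_000, 10_000, 280_000),
]
past_decisions = [1, 0, 1, 0]

model = DecisionTreeClassifier().fit(past_applications, past_decisions)

# The learned rules are whatever best reproduced the training data,
# which is why they can quietly inherit that data's biases.
print(model.predict([(45_000, 25_000, 220_000)]))
```

The crucial difference is that nobody wrote the decision rule: the model derived it from past outcomes, for better or worse.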
In a short space of time, and with remarkably little fanfare, algorithms have replaced human decisions in huge swathes of life. Some of the decisions are good, some bad, but how these decisions are made is rarely transparent.
The public gets very little insight into how proprietary software works, and with machine learning algorithms it’s questionable whether even their programmers fully understand how decisions are made, given that the software teaches itself. In one extreme case of machine learning gone wrong, an algorithm built to sort photos of dogs from photos of wolves instead taught itself to recognise snowy backgrounds.
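The wolf case is easy to reproduce in miniature. In the toy sketch below (features and data invented for illustration), a spurious feature, snow in the background, happens to correlate perfectly with the label in the training set, so the model learns the snow rather than the animal.

```python
from sklearn.linear_model import LogisticRegression

# Features: [dark_fur, snowy_background]; label: 1 = wolf, 0 = dog.
# In this (biased) training set, every wolf photo has snow in it,
# while fur colour carries no signal at all.
X_train = [[1, 1], [0, 1], [1, 0], [0, 0]]
y_train = [1, 1, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# A dog photographed in the snow is now likely to be labelled a wolf,
# because "snow" was the most predictive feature during training.
print(model.predict([[0, 1]]))  # expected: [1]
```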
Algorithms and machine learning increasingly make decisions that affect our daily lives, decisions far more consequential than telling wolves and whippets apart.
Next time you apply for a job, there’s a good chance at least part of the assessment will be carried out by a computer. If you’re unfortunate enough to be prosecuted in the US, an algorithm is likely to recommend to the judge whether or not you’re released on bail. Whether your mortgage application is accepted, or you get a cheap quote for car insurance, the decision won’t be taken by a human. And prototype self-driving cars rely on algorithms to make life-or-death decisions affecting their passengers - as well as other road users.
While removing human biases can make decision making fairer, it does not always work out that way.
A machine learning algorithm that has trained itself to identify prospective hires based on how similar they are to successful employees in a company might be a great way of eliminating unconscious bias in your recruiters, or it might simply replicate the biases of your workforce with an added sheen of technological neutrality.
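A toy version of that failure mode (all features and data invented for illustration) shows how easily it happens: if past hiring favoured, say, graduates of one university, a model trained on those outcomes learns the preference and reports it back as merit.

```python
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, attended_favoured_university]
# Synthetic history in which the university, not experience,
# drove past hiring decisions (1 = hired, 0 = rejected).
X_train = [
    [2, 1], [3, 1], [1, 1],
    [5, 0], [4, 0], [6, 0],
]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# The model now prefers a less experienced insider over a stronger
# outside candidate, dressing the old bias up as a neutral score.
print(model.predict([[2, 1], [6, 0]]))  # expected: [1 0]
```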
Similarly, there are credible (though contested) claims that an algorithm used to decide who goes to jail in the US is harsher to black petty criminals, and unduly lenient to public menaces who are white.
Self-driving car algorithms are potentially troubling too. There are reasons to worry that biases could be baked into their software. And even if the software is scrupulously fair, one manufacturer has already said the safety of the occupants comes first: reassuring if you are their customer, but rather less so if you plan to walk along one of the roads they drive on.
One vexed topic which burst into the mainstream in late 2016 - without, yet, being blamed on algorithms - is the problem of fake news and social media.
The algorithmic curation of news and the filter bubbles of our own making combine with a lack of transparency over who produces what content, and who checks their facts, to leave many of us wondering whether we should believe anyone at all.
In the circumstances, it’s hardly surprising that Donald Trump was able to blame the Google News algorithm for bad press (while benefiting from a deluge of fake news stories himself).
In the coming year, the backlash against algorithmic decisions will begin in earnest. The trigger could take many forms. It could be a politician forced to resign over fake news pushed by a news algorithm. It might be a murder committed by a violent thug released on bail thanks to court software. It might be an employer successfully sued over a discriminatory recruitment system or a pedestrian killed by a self-driving car that’s protecting its passenger.
But it’s algorithmic decision making as a whole that will be in the firing line when the controversy comes to life.
Technologists will be forced to confront the criticism and address some of the more obvious concerns around opacity and bias. As with previous technological controversies, the promise that algorithms can make the world better could be at risk if the response isn’t quick and credible.
The flare-up over fake news has already prompted Facebook and Google to respond with efforts to find technological solutions to this new technological controversy. Similarly, organisations such as the Trust Project are seeking to produce technology that independently ranks the trustworthiness of the media we consume.
Whether we will choose to trust such technological fixes remains to be seen.
If we don’t, expect businesses to start advertising algorithm-free services, ranging from mortgages approved by real bank managers to a resurgence of news websites with humans curating the content. (You could even call them ‘editors’.)
Just as customers are willing to pay more for food without genetically modified crops or pesticides, people will place a premium on decisions made by humans if they don’t trust the machines.