On 25 April I joined representatives from several European cities at a workshop in Helsinki to discuss the thorny issue of artificial intelligence (AI) ethics, hosted by the city’s Chief Digital Officer, Mikko Rusama.
In keeping with much recent debate on this topic, our discussion focused on how cities (and particularly their public sector organisations) can strike the right balance between embracing the many benefits that AI has to offer and ensuring that the risks and potential downsides of this strand of technology are addressed.
Atte Jääskeläinen, Professor of Practice at Lappeenranta University, presented on the topic, “Is ethics of AI somewhat different than regular ethics?”, highlighting the complexity of aligning new technologies with communities’ changing and sometimes contradictory values. Jen Hawes-Hewitt, Global Cities and Infrastructure Industry Lead at Accenture, then shared her thoughts on the theme: “We are all biased,” laying out some ideas and strategies for addressing concerns around AI bias.
An assumption at the start of the workshop was that we were there to create a set of principles for the ethical use of AI to which cities could subscribe.
Of course, many cities, universities, businesses and other institutions have already written their own versions. You can see some of them in this blog post.
Yet as I have previously argued - and as I presented at this session - virtually all such attempts suffer from one of two flaws. Those, like Google's, that offer broad principles are so high level that they provide almost no guidance on how to act in any specific scenario. Meanwhile, the recommendations in more detailed codes cease to be practical or meaningful as the complexity of an AI increases. And if a code doesn't work for the more complex cases, it will be redundant before it's ever followed.
But if principles aren’t the answer, what is?
My recommendation has previously been that a better approach is to equip public sector organisations and their staff with a set of questions that they should be able to answer before deploying an AI in a live environment. My best attempt at these questions is below:
My point is not that there is a 'right' set of answers, but rather that organisations should be able to 'show their workings', providing assurance that they have carefully thought through each of these points and put in place appropriate safeguards, oversight and evaluation to ensure risks are minimised. Significantly, this approach places the emphasis on public sector professionalism, enabling staff to make a judgement call about what level of risk is acceptable in different contexts. You can read my reasoning about why this matters.
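As a thought experiment, here is a minimal sketch in Python of what 'showing your workings' might look like in practice: a pre-deployment gate that blocks an AI system from going live until every guiding question has a documented answer and a named reviewer. Everything here - the class names, the sample system, the sample answer - is hypothetical, and the placeholder IDs Q1-Q10 stand in for the actual questions above rather than restating them.

```python
# A hypothetical sketch, not a real system: a pre-deployment gate built
# around the guiding questions. Placeholder IDs Q1-Q10 stand in for the
# actual questions above.
from dataclasses import dataclass, field


@dataclass
class QuestionRecord:
    question_id: str    # e.g. "Q3"; the question texts live in the list above
    answer: str = ""    # the organisation's documented 'workings'
    reviewer: str = ""  # the named person who signed off on this answer


@dataclass
class DeploymentReview:
    system_name: str
    records: list[QuestionRecord] = field(default_factory=list)

    def unanswered(self) -> list[str]:
        """IDs of questions still missing a documented answer or a reviewer."""
        return [r.question_id for r in self.records if not (r.answer and r.reviewer)]

    def may_deploy(self) -> bool:
        """The gate: go-live only once every question is answered and signed off."""
        return not self.unanswered()


# Hypothetical usage for an equally hypothetical system.
review = DeploymentReview(
    system_name="benefit-claim-triage",
    records=[QuestionRecord(f"Q{i}") for i in range(1, 11)],
)
review.records[0].answer = "Accuracy and drift reviewed monthly by the data team."
review.records[0].reviewer = "Head of Data Science"

if not review.may_deploy():
    print("Deployment blocked; unanswered:", review.unanswered())
```

The design choice that matters is recording a reviewer against each answer: that is what turns the questions from a statement of intent into an audit trail that can be inspected after the fact.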
Yet after the discussion at the Helsinki workshop, I now conclude that not even this approach is sufficient. To understand why, we need to go back to basics and think carefully about what we are trying to achieve.
Why have so many cities and institutions gone to the trouble of writing codes of standards or principles for the use of AI? Presumably for two reasons.
First, they want to be able to benefit from the many useful things AI can do. AI can codify best practice and roll it out at scale, remove human bias, enable evidence-based decision making in the field, spot patterns that humans can't see, optimise systems too complex for humans to model, quickly digest and interpret vast quantities of data, and automate demanding cognitive activities. That's quite the list, and it's in everyone's interests that public sector organisations can take full advantage of these benefits.
Second, they recognise that there are legitimate concerns that AI could be used by the public sector in ways that invade privacy or cause harm, unfairness and moral wrongs. They know it's vital that the public can have confidence in how AI is used, otherwise they risk a backlash. The recent Cambridge Analytica scandal showed that the public is waking up to, and cares about, how their personal information and behaviour are analysed and used by algorithms. Cities therefore want to get their own approach right.
Yet what many cities seem to have failed to appreciate is that achieving these things calls not for lofty, high-level principles but for measures that genuinely give people working in the public sector the skills and confidence to use AI in the right way, in the right contexts and with the right safeguards in place.
In short, what cities need is actionable guidance that changes behaviour.
Without that, the best we will have are well-meaning documents with statements of intent that are quickly ignored and forgotten.
So what might a better approach look like that would actually provide actionable guidance and change behaviour?
Based on my conversations in Helsinki, I think there are at least four component parts:
1 - Guiding questions: I stand by my argument that offering a practical set of questions is the most helpful thing we can do to guide public servants and their organisations to think through the most important steps of using an AI safely and responsibly in any given context. I have offered the 10 questions above as a starting point. Further work is needed to see how they can be improved and expanded upon.
2 - Appropriate methods: In addition to the guiding questions, I'd argue that certain methods lend themselves to running an effective AI project. Specifically, a double diamond approach - genuine discovery of the problems to be addressed, followed by prototyping a number of solutions to confirm or refute a hypothesis - would be particularly desirable. I say this because - as with all new technologies - there's a real risk that organisations will set out with the assumption that they somehow need an AI initiative. Organisations will avoid much wasted effort and needless complexity (and potential harm) if they have a robust process for determining what solution actually addresses their problems. If that turns out to be AI, great. If not, they may well find that a simpler, less challenging solution works better.
My preference is to add a prior step to the classic double diamond approach: determining what real-world outcome is to be achieved - see the diagram below, and the sketch after this list. The diagram also prompts organisations to think carefully about both the technical and non-technical aspects of the problem and potential solutions, to avoid sleepwalking into a technology-only approach.
3 - Representative people: It’s also vital that the right people are involved in the process of designing and deploying an AI. Lack of diversity in technology-enabled projects is widely considered to be problematic, yet is rarely systematically addressed. Ensuring that a robust process is in place to involve people of different ages, backgrounds, ethnicities and walks of life is important - not least so that the broadest range of perspectives is brought to bear on revealing the true nature of the problems to be solved and the likely impact of any proposed solutions.
4 - Training for skills, attitudes and mindset: To bring all the above alive, it’s not enough for guidance to sit in documents. Working with emerging technologies - AI included - is new for most organisations and they need to have the right skills, attitudes and mindset to make it work. Real thought needs to be given to what those organisational and cultural pieces need to look like for AI projects to work well.
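To make the method in point 2 concrete, here is a similarly hypothetical sketch of the outcome-first double diamond as a sequence of ordered stages, where the next stage only opens once the current one has a recorded artefact. The stage names and the sample outcome are illustrative, not prescriptive.

```python
# A hypothetical sketch of the outcome-first double diamond: ordered stages,
# each of which must record an artefact before the next one opens.
from enum import Enum


class Stage(Enum):
    OUTCOME = "agree the real-world outcome to be achieved"
    DISCOVER = "explore the problem, technical and non-technical alike"
    DEFINE = "converge on the problem actually worth solving"
    DEVELOP = "prototype several candidate solutions, AI or otherwise"
    DELIVER = "converge on and ship whatever solution works"


def next_stage(artefacts: dict[Stage, str]) -> Stage | None:
    """Return the first stage with no recorded artefact, in definition order."""
    for stage in Stage:  # Enum iteration preserves definition order
        if not artefacts.get(stage):
            return stage
    return None


# An illustrative outcome; a team cannot jump to DEVELOP ("build an AI")
# while DISCOVER and DEFINE are still empty.
progress = {Stage.OUTCOME: "Reduce missed clinic appointments by a fifth."}
print("Next:", next_stage(progress))  # -> Stage.DISCOVER
```

Modelling the sequence explicitly is a small thing, but it makes skipping discovery a visible, deliberate decision rather than a default.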
What about legal compliance, ethics and values? Don’t we need some principles for these? Many people propose that these things need to be thought about anew in the context of AI. I disagree.
Every city is subject to existing legal frameworks covering the use of data and public sector activities, and these must be adhered to. Citizens have guaranteed rights. Our ethics and values are already established - over the course of hundreds, if not thousands, of years. Cities' use of AI necessarily takes place within that context and must be made to complement it. If we can't start with the assumption that public sector organisations will set out to use AI in a way that is legal, ethical and aligned with their values, then I'd suggest we have a much bigger issue on our hands.
The diagram below visualises how the four elements of providing actionable guidance sit within an existing context of a city’s laws, ethics and values.
I suspect we are reaching the limit of what can meaningfully be discussed in the abstract. I'd urge cities wishing to lead in this area to get out and proactively develop, test and improve prototypes of these questions, methods, training and more inclusive ways of involving people - on actual initiatives - to see what works.
The litmus test is whether we can really change behaviour and enable organisations to follow a path of responsible innovation.
Principles were never going to be enough.