Shadow of the smart machine: Would it be wise to create an ‘Intelligent Gun’?

Learning machines are capable of working ever more autonomously on ever more complex tasks. In this blog, Muz Janoowalla explores whether it would be smart for humankind to develop an ‘intelligent gun’.

There are an estimated 875 million civilian, law-enforcement, and military firearms in the world, of which 650 million are in the hands of civilians, either legally or illegally.

Given the spate of high-profile gun attacks in recent years – particularly in the US, but also in France, Norway, Pakistan and Tunisia, to name but a few – it is disturbingly easy to imagine gunmen on the loose in a school or at a public event, shooting indiscriminately and leaving casualties in their wake.

Imagine how different things could be if a gun had artificial intelligence built into it, turning it into an intelligent gun. If the gun could recognise that the shooter did not hold a licence, that it had been taken away from its permitted shooting range, or that it was in a school or a densely populated urban area, it could autonomously decide to refuse to fire. Imagine how many innocent lives could be saved if artificial-intelligence technology were applied in these kinds of ‘rogue-gunman’ scenarios, preventing guns from firing on civilians.
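To make the idea concrete, here is a minimal, purely hypothetical sketch of the kind of rule-based safety check such a gun's firmware might run before permitting a trigger pull. The blog describes no actual implementation, so every name, type and criterion below is an illustrative assumption; a real system would depend on sensors, licensing databases and geofencing far beyond this.

```python
from dataclasses import dataclass

# Hypothetical illustration only: no real firmware or API is being described.
# Each field mirrors a criterion from the paragraph above: a valid licence,
# staying within a permitted shooting range, and avoiding protected zones
# such as schools or densely populated urban areas.

@dataclass
class FiringContext:
    shooter_has_valid_licence: bool  # assumed result of an owner-ID check
    inside_permitted_range: bool     # assumed result of a geofence check
    in_protected_zone: bool          # e.g. a school or dense urban area

def may_fire(ctx: FiringContext) -> bool:
    """Return True only if every safety criterion is met.

    The check fails closed: any negative check refuses to fire.
    """
    if not ctx.shooter_has_valid_licence:
        return False
    if not ctx.inside_permitted_range:
        return False
    if ctx.in_protected_zone:
        return False
    return True

# Example: a licensed shooter who has carried the gun into a school.
print(may_fire(FiringContext(True, False, True)))  # -> False: refuse to fire
```

The essential design choice in such a scheme is that it fails closed: absent or negative evidence means the weapon refuses to fire, rather than defaulting to permission.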

It is easy to see the opportunities that artificial intelligence, cognitive computing and machine learning could offer in stopping humans from making bad decisions. It could, in theory, prevent weapons from being fired illegally – and even eliminate human error and the accidental discharge of weapons.

But with opportunities also come challenges. What if an intelligent gun is capable of firing of its own volition? Who then assumes responsibility for the weapon’s actions? How can we secure intelligent guns and prevent them from being hacked by third parties with malicious intent? Extending this to a combat situation, what if an enemy were able to take control of an army’s weaponry, rendering it unable to return fire and respond to attacks?

Whilst it would be irresponsible and wrong to suggest that we could ever make guns completely safe, it is certainly conceivable that they could be made safer with the addition of artificial intelligence that allowed them to fire only if certain criteria were met.

In North America, the Canadian armed forces are currently exploring the development of a smart military-assault rifle, whilst police departments in Santa Cruz County, California, and Carrollton, Texas, have also begun to test smart-gun technology.

In my view, governments are right to explore how ‘intelligent’ gun technology could make the world safer – but as they navigate towards the development of intelligent guns, they also need to address and resolve questions of responsibility, security and ultimate control of the intelligent machine.

Disclaimer: the views and opinions expressed in this blog are those of the author and do not necessarily reflect the official position of Accenture.

This blog is part of the Shadow of the Smart Machine series, looking at issues in the ethics and regulation of the growing use of machine-learning technologies, particularly in government.

Author

Muz Janoowalla

Muz is the Digital Policing and Analytics Lead within Accenture’s Global Policing Business.