Quantum computers promise to be truly disruptive, for better and worse. Should we be doing more to get ready?
What will be the most revolutionary technologies of the coming decades? Alongside artificial intelligence (or the bundle of techniques that we lump together under that banner...), CRISPR, computer-brain interfaces and autonomous vehicles, another hugely disruptive technology is approaching: quantum computing.
Quantum computers operate in a fundamentally different way from ‘classical’ (normal) computers, making use of quantum phenomena like entanglement - what Einstein called ‘spooky action at a distance’ - and superposition - being in more than one state at the same time.
These weird phenomena have no close analogues in the classical world, which makes them particularly hard to grasp. However, several decades of experiments have shown that, whilst we struggle to interpret quantum theory in ways we can easily understand, it makes remarkably accurate predictions.
In a classical digital computer, bits of information are either ones or zeros; switches are either on or off. To give a specific example, a register of 8 classical bits must be in one state out of 256 possible ones (ranging from 00000000 to 11111111 in binary) at any given time.
However, a quantum computer operates using quantum bits, or ‘qubits’, which can exist in a superposition of states. A register of 8 qubits can potentially be in 256 possible states simultaneously.
In theory, it is then possible to perform operations on this register which, in effect, are simultaneous operations on all its possible values. Whereas a classical computer would need to run the operation 256 times in succession to test all possibilities, a quantum computer could undertake all these operations at the same time, in parallel.
One deeply imperfect analogy is to think of solving a maze. A classical approach can be thought of as sending a robot to follow a path until it reaches a dead-end, and then starting again, trying path after path until it finds the way out. A quantum approach, in contrast, might be better envisaged as pumping smoke into the maze, and then identifying the probable solution by detecting where the smoke is flowing; all possible routes are ‘tested’ simultaneously rather than sequentially.
(This is not a great analogy, but all analogies with quantum mechanics are necessarily imperfect and invariably misleading…)
Significantly, the advantage in such simultaneous processing power grows with every additional qubit: a nine-qubit register might, theoretically, allow 512 possible conditions to be evaluated simultaneously; a 49-qubit register would give nearly 563 trillion. This means – summarising very crudely – a potential 563 trillion-fold speed advantage over an equivalent classical computer, at least for certain operations.
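The doubling described above is simple arithmetic, and easy to check for yourself. A quick sketch (note that this illustrates only the size of the state space, not an actual quantum speed-up):

```python
# An n-qubit register can superpose 2**n basis states;
# the count doubles with every additional qubit.
for n in (8, 9, 49):
    print(f"{n} qubits -> {2**n:,} possible states")

# 8 qubits  -> 256
# 9 qubits  -> 512
# 49 qubits -> 562,949,953,421,312  (roughly 563 trillion)
```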
The implications of this are potentially vast.
There is still an active debate among scientists about exactly where quantum computers might have an advantage (partly because there’s disagreement about how hard some classical problems really are). However, there are thought to be specific types of problem which would take the age of the Universe to solve with the most powerful of today’s classical computers, but which will be theoretically soluble within minutes or hours using a modest quantum computer.
One broad application area is expected to be process optimisation. For instance, industrial processing plants may have hundreds or thousands of parameters (like temperature, pressure, time, concentration, etc.) that need to be finely tuned in order to maximise the purity or minimise the cost, say. The complexity means that conditions are often set using a process of trial-and-error, with no guarantee that the plant is operating optimally.
Quantum computers may, in principle, be able to explore the range of possible combinations and permutations in a way which is simply not possible using classical computers, promising huge productivity increases for a wide range of industries.
Related problems (related, at least, in terms of the underlying mathematics) include optimising logistics and better searching of unstructured databases (which, along with the prospects for improving AI, is one reason why Google is very active in this space).
A second broad application area concerns quantum simulation. There are many areas where scientists would like to be able to model behaviour on an atomic level – including drug development, biological processes and new materials. Many universities employ high performance supercomputers for just this purpose. But even the largest supercomputers struggle to model anything but very basic systems with the accuracy that is needed.
Quantum computers promise revolutionary breakthroughs in numerous scientific fields, ultimately giving rise to things like better medicines, improved batteries and materials with hitherto impossible properties. Using quantum computers to simulate quantum mechanical processes themselves is expected to yield significant scientific advances.
The applications above are so wide-ranging that many people expect quantum computers, when they finally arrive, to deliver truly staggering benefits.
But there may also be a darker side.
Another function where quantum computers are expected to excel is factorisation - breaking a large number down into a product of smaller numbers. Although this may sound like a dry theoretical problem, its relative intractability on classical machines is fundamental to many public-key encryption systems, like RSA, which depend on some mathematical functions being effectively ‘one-way’.
(Most of us could, with pen and paper, multiply together two large prime numbers, but would find it significantly more resource-intensive, if not impossible, to start with the end-result and work backwards.)
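The asymmetry can be made concrete with a toy sketch in Python. The numbers here are far too small to be secure - real RSA moduli run to hundreds of digits, and trial division is the crudest possible attack - but the shape of the problem is the same:

```python
p, q = 104723, 104729      # two (small) prime numbers
n = p * q                  # multiplying is instant: 10,967,535,067

# Recovering p and q from n alone means hunting for a divisor.
# Easy at this scale, but the work grows steeply with longer primes.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(factor(10967535067))  # (104723, 104729)
```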
Such encryption is fundamental to today’s digital society.
Whenever one uses VOIP, webmail, online banking, any form of e-commerce or any other website using ‘https’, an encryption protocol keeps that connection secure. Similar techniques are also widely used to create digital signatures that verify files’ contents, securing automatic updates, as well as being used in the hashing functions of numerous blockchains and password verification systems.
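The hashing side of this is easy to illustrate with Python’s standard hashlib. (This is a toy, not a full password or signature scheme, which would also need salting and, for passwords, a deliberately slow hash.)

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 fingerprint of some content."""
    return hashlib.sha256(data).hexdigest()

original = b"software update v1.2"
fingerprint = digest(original)

# Any tampering, however small, changes the fingerprint completely,
# so comparing digests verifies the content is untouched.
print(digest(b"software update v1.2") == fingerprint)  # True
print(digest(b"software update v1.3") == fingerprint)  # False
```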
Unfortunately, quantum computers are expected to break much of this, leading some to label them ‘tools of destruction’ that will lead to an ‘extinction event’ for information security.
Vastly easier decryption would make it far easier for criminals to steal our secrets and our money, as well as to unravel blockchains.
(This is not just a matter of breaking Bitcoin; just imagine the value of altering blockchain land-registries if they have become the definitive record of who owns trillions of pounds-worth of real-estate. Or the chaos that could be caused by a malevolent organisation unravelling a State’s blockchain-based record of births, marriages and driving licenses.)
‘Quantum resistant’ encryption techniques and blockchains are being developed, based on types of mathematical problems where quantum computers are not (currently) thought to have an advantage. However, changing the cryptographic algorithms of hundreds of millions of websites – not to mention that embedded in tens of billions of connected devices across the world – will be a huge feat.
In addition, there is virtually nothing that governments can do to prevent the eventual decryption of encrypted emails from the recent past, which their senders thought would remain secure for a millennium.
The potential for diplomatic embarrassment is huge.
But is this really going to happen in the foreseeable future? Scientists are divided.
Optimistic observers note that, unlike nuclear fusion - where a common joke in the field is that ‘fusion is 30 years away and always will be’ - the consensus estimate of the time to ‘quantum supremacy’ (the future, and rather poorly defined, landmark when quantum computers are able to solve some problems faster than classical ones) has typically been revised downwards.
Certainly, the number of qubits in prototype chips keeps growing: Intel surprised many observers in 2018 by unveiling its 49-qubit quantum chip named ‘Tangle Lake’; Google’s quantum computing chip, codenamed ‘Bristlecone’, has 72 qubits; Fujitsu and D-Wave have chips with 1024 and 2048 qubits, respectively (but of a very different type that is not only difficult to compare but also quite controversial within the academic community). Related patent applications have rocketed in the past four years.
Significantly, access to prototypes is already being made public: IBM is offering researchers cloud access to its quantum computers today, as are Alibaba, IonQ and the much smaller startup Rigetti. Microsoft is expected to follow suit shortly. And dedicated quantum computing software companies are already being formed, in anticipation of the hardware developing.
(For the immediate future, the most likely uses are for research groups trying to understand possible future applications and develop better algorithms. However, Volkswagen is already experimenting with D-Wave's quantum computer to predict traffic patterns in Beijing, whilst IBM’s CEO recently announced that he expected to see income from quantum computing “probably in a two- to five-year time frame”.)
As a result, several communications and cyber-security organisations, including the NSA, NIST, ETSI and NCSC think the future threat is sufficiently real for them to publicly urge moves towards ‘quantum safe’ encryption as soon as possible.
Yet sceptics remain.
Several academic researchers believe that the field is too filled with hype and fantasy. They claim that incentives (both within the commercial and public sectors) are leading research teams to over-promise, whilst also causing research-funders to under-scrutinise.
Some physicists have also privately questioned whether the influx of venture capital into the field, from investors understandably keen to get a slice of a genuinely radical technology, is distorting perceptions. Universities and public funders often see private funding as an opportunity for leverage or a sign of validation, and private funders do likewise; yet, as with any positive feedback loop, the system can sometimes run away.
(This seems especially the case where there is an ultra high-risk/high return investment case, yet experimental validation or proof of concept is difficult, and there are a limited number of expert academics.)
Despite corporate press releases trumpeting the number of qubits, the usefulness of any real-life machine depends not just on quantity but also on quality. Real machines need to preserve qubits’ ultra-fragile quantum entanglement and superposition, and deal with the errors that inevitably arise when these break down. This is a major technical obstacle.
Moreover, despite existing quantum prototypes, there is no undisputed demonstration of quantum supremacy for any problem at all, even for super-idealised cases specially designed around quantum systems. (There is at least one substantial prize, which remains unclaimed at the time of writing.)
So what should policymakers be doing to prepare for, or hasten, the arrival of quantum computers?
Regulation is one concern. The export of high-performance computing is already controlled (under the Wassenaar Arrangement) to prevent proliferation by criminals and hostile states. And at the end of last year, the U.S. Commerce Department's Bureau of Industry and Security gave notice that it was considering controlling many more emerging technologies, including quantum computers.
Can any government afford to wait for an RSA-breaking computer to appear before considering control, or should it step in before that point? As Geoff Mulgan has blogged previously, timing poses an almost “inescapable dilemma” for regulators, who are too often caught between the need to act swiftly in rapidly-developing industries, and the desire not to quash emergent sectors.
In the case of quantum computers, it is notable that despite the reservations above, many of the sceptics still agree that quantum computers are coming at some point. But since most experts are themselves quite heavily invested, realistic evaluation of timescales is hugely difficult.
Moreover, once quantum computers appear, will it be possible to prevent their use by bad actors intent on criminal activity or destabilising states, without stifling innovation for good? This is a classic ‘dual-use’ headache for governments, with huge stakes.
A different question is the systemic change that might be precipitated. Computing has already undergone several broad shifts, from mainframes to PCs to cloud and mobile. Depending on which hardware comes to dominate (many designs require highly controlled environments, often with expensive refrigeration), is it possible that we might, at least in the early days, revert to a much more centralised system where users rent time – like the Diamond Light Source or the Large Hadron Collider?
This, in turn, would mean obvious questions about dependence and political leverage: would the UK government try to push through measures like the digital services tax, say, if the only useful quantum computers were also in the US?
One way to reduce future dependence would be by nurturing our own home-grown industry. The UK already has a National Quantum Technology Programme, and a network of Quantum Technology Hubs, of which quantum computing forms a part. Last Autumn’s Budget also committed £35 million towards the establishment of a National Quantum Computing Centre. However, whilst this is not insignificant, by comparison China is reported to be investing $10 billion to build a new National Laboratory for Quantum Information Sciences by 2020.
The combination of AI and quantum computers is another area where we will need to devote attention in due course. Quantum computers promise to make AI systems vastly more powerful, but potentially more opaque. (Almost by definition, for areas where quantum computers may have an advantage, it will be difficult, if not impossible, to simulate their processes with classical systems).
What else should we be thinking about? If you have a view on how quantum computers will shape the world, you might like to consider submitting an essay for our ‘Tipping Point Prize’, which is focused on the next decade’s breakthroughs in technology and science. Submissions are due by Friday 15th March 2019. For more information, see the launch blog here.