Through the essays, we discover what’s good, bad and unexpected about China’s public sector AI innovation.
Six key lessons emerge:
1) Rapid AI development is driven by local innovation
China is experiencing rapid innovation in many AI fields, thanks to its strategy of actively cultivating local innovation ecosystems. Local governments in China are empowered to drive experimentation and implementation, and are establishing cross-sector partnerships with industry and academia. In light of the UK government’s regional ‘levelling-up’ ambitions to address geographic economic imbalances by, among other things, boosting research and innovation throughout the UK, China’s innovation ecosystem is instructive. China’s increased R&D spending, combined with devolved local governments that have the power and authority to drive collaboration with technology and research partners, highlights how flourishing innovation ecosystems can be fostered outside of capital cities. There are echoes here of previous calls from Nesta and others for more of the UK’s R&D budget to be devolved to cities and regions to spread the impact and benefits of innovation across the UK. China’s experience suggests there is merit in pursuing this approach.
2) ‘Experiment first and regulate later’ leads to quick progress
China has made swift progress in practical AI applications because it experiments first and regulates later. China’s agile and pragmatic approach to innovation is resulting in the fast-paced development of AI applications, especially in the field of healthcare, with promising results in areas such as medical image processing and pharmaceutical research. Health challenges such as ageing populations are universal, and there are major gains to be had for governments that can move quickly and find new and effective approaches to tackling them. The regulatory environment in Europe may not allow us to replicate China’s fast-paced approach to experimentation and implementation (and there are reasons why we wouldn’t want it to), but it is worth considering how experimentation methods such as regulatory sandboxes and innovation testbeds could adapt this approach for a western context. This resonates with Nesta’s work on ‘anticipatory regulation’, which has explored the role these methods can play in allowing new products and services to be safely tested and brought to market more quickly – an approach that has also been called for by NHS Digital in the UK.
3) Innovation isn’t always the result of cutting-edge tech
China’s AI innovation successes are not always due to cutting-edge technologies. Instead, success in China is often down to rapid deployment and scaling of existing AI technologies. China’s innovation model, which prioritises ‘fusion and speed’ over breakthrough technologies, demonstrates that an AI strategy doesn’t have to focus solely on the most advanced technologies in order to be successful. While conversations about innovation in the UK often focus on R&D for new, advanced technologies, equally important is ensuring better adoption of existing technologies – yet the uptake of available AI technologies among UK councils, for example, remains strikingly low. There is clear scope for dedicated efforts to be made in the UK and other European countries to consider how existing AI technologies can be adopted and scaled to improve public services.
4) Good-quality services shouldn’t come at the cost of widening inequalities
While AI has the potential to widen access to good-quality services, it could also end up reinforcing inequalities. China provides a useful lesson in what should be avoided. Developments driven by private sector tech companies and not then universalised by the public sector risk widening inequalities and marginalising certain groups. This is the case with China’s adoption of AI educational tools. With development led by private sector ed tech companies, these tools are often only accessible to wealthier families. Nesta has highlighted the importance of ensuring that innovation tackles inequality rather than increases it, and has called for improved understanding among policymakers about how innovation – and innovation policies – affect different social groups, and for interventions that serve the needs of those who are particularly marginalised.
5) The issue of AI ethics in China is complicated
Contrary to dominant western media narratives, China is talking about AI ethics. Various multi-stakeholder expert committees have been established and have released documents outlining AI ethics principles, many of which align with existing global standards. The international community could do more to engage with China on these issues, and foster greater global cooperation on key conversations about AI ethics and safety. Europe has the opportunity to draw on its strengths and be a co-operation partner on matters such as data protection and AI ethics.
Yet China’s discussions around AI ethics don’t detract from the fact that elements of the authoritarian Chinese government’s use of AI are profoundly alarming and violate civil liberties. Privacy laws that protect consumers from tech companies do not limit the government’s access to and use of private data. There are many instances where citizens’ privacy is being infringed, or where the operation of AI systems lacks transparency. Most disturbingly, reports and leaked documents have revealed the government’s use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in Xinjiang province.
China is a global leader in AI surveillance and facial recognition technologies. But companies in the US, France, Germany and elsewhere are also active in developing and supplying surveillance technologies worldwide, and governments in Europe and the US have been criticised by rights groups for their use of facial recognition. The relationship between AI and surveillance deserves deep consideration by policymakers and technologists the world over. The ethical tensions highlighted in this essay collection may focus on China but should give us cause to reflect on fundamental questions about the use of AI wherever it is being deployed.
6) AI is not a panacea
China’s experience also demonstrates that AI is not a silver bullet – challenging the techno-optimism of Silicon Valley. In areas with clearly defined and generally accepted outcomes, such as smart city traffic optimisation, AI is a promising tool to improve the efficiency and reach of services. However, in areas of social life that are complex, unpredictable or subject to debate – such as education or the judiciary – AI may not be a well-placed or desirable tool. In these areas, fundamentally human qualities such as flexibility, emotional intelligence, contextual awareness and moral judgement remain vital. Here AI may be able to play a useful role as a decision support tool but should not replace humans.