
Regulating ‘Online Harm’

The UK Government has today published wide-ranging proposals for the regulation of social media and other online fora.

This follows last year’s House of Lords Committee inquiry into internet regulation (‘The Internet: to regulate or not to regulate?’), as well as a related inquiry by a Commons Select Committee into Disinformation and ‘Fake News’.

Both of these committees recently presented their final reports and, in less febrile political times, we might have expected rather more scrutiny and debate of these. The Lords’ inquiry was ludicrously broad, in my view, whilst much of the Commons Select Committee’s work felt quite nakedly political; the latter’s discussion of whether social media had influenced the Brexit referendum would have been more credible had the 11-strong Committee not been composed entirely of overt Remainers.

Nevertheless, whatever the deficiencies of these inquiries, concerns about social media have been building for some time. Questions have arisen not only concerning democratic interference but also about the distribution of material and images relating to terrorism, political extremism, child abuse and self-harm. Facebook’s appointment of Nick Clegg last October was a clear response to a political environment that was turning more hostile towards social media and ‘big tech’ in general.

The proposed regulatory measures announced today include a mandatory ‘duty of care’ and a responsibility to prevent exposure to ‘illegal and harmful activity’. This duty is to be enforced by a regulator, possibly funded by an industry levy.

The scope is huge, essentially covering the whole of ‘Web 2.0’. It explicitly includes all ‘companies that allow users to share or discover user-generated content or interact with each other online’ – so including not only the major social media giants, but anyone who operates any kind of user forum, messaging system, file hosting, matching service or search engine. ‘Companies’ are specified as including charities, so this site would fall in scope.

The type of content being addressed is also vast, covering not only obviously illegal acts, like terrorist propaganda and the sale of illegal goods, but much vaguer harms such as ‘trolling’, ‘hateful content’, ‘disinformation’ and even ‘excessive screen time’. Unfortunately, the white paper frequently conflates these categories, bundling legal and social issues together.


The proposed regulation would result in much heavier censorship than at present, with many firms inevitably filtering out entirely legal material, as well as having to make near-impossible judgements about individuals’ means and motives. With no discussion of how civil liberties would be safeguarded, assurances that ‘the regulator will not be responsible for policing truth and accuracy online’ seem quite feeble, given the vague and highly subjective nature of terms like ‘trolling’ and ‘hate’.

However, aside from the threats to free speech, the proposals also risk stifling the emergence of potentially competing social media platforms, further entrenching the dominance of the existing players. Startups and SMEs are explicitly within scope, and the cost of compliance will be yet another regulatory hurdle for young firms. Moreover, given tech giants’ previous behaviour towards new rivals, it is entirely conceivable that this regulation would be used to take down young upstarts.

Beyond this, there are also significant questions about the ability of any regulator to keep pace with decentralised platforms, or to deal with the irreversible nature of public blockchains.

But is regulation the only way forward? Are there other approaches which could better protect freedom of speech and also stimulate more innovation?

In my view, many of the problems with legal content on online fora are exacerbated by the lack of competition. Facebook and Twitter are effectively monopolies – ‘virtual town squares’ used by large swathes of society. Not only do these monopolies dampen the emergence of innovative ways of addressing online harms; they also mean that each company must choose a single editorial policy to cover everyone.

But, as I’ve asked previously, what would happen if Twitter, say, were not a company but a protocol?

In the same way that a common protocol allows us all to communicate via email despite using different providers, one could imagine a world where everyone could tweet (or whatever the verb would be) using a common, interoperable micro-blogging protocol, yet use different providers to view their feeds.

If this were the case, we could imagine firms providing various options to suit different users: verification systems and safe feeds for children; completely uncensored feeds for libertarians and free-speech proponents; and any number of creative solutions for everyone else in between. We wouldn’t need to argue over whether a given platform should be more or less censorious, and in what manner, since everyone could choose for themselves.
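To make this concrete, here is a minimal, purely illustrative sketch in Python: a shared stream of protocol-level posts, with each ‘provider’ reduced to little more than a filtering policy applied before display. The Post structure, the make_feed helper and the flag labels are invented for illustration, not any real platform’s API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Post:
    author: str
    content: str
    flags: set = field(default_factory=set)  # hypothetical labels from moderators or third parties

def make_feed(policy: Callable[[Post], bool]) -> Callable[[List[Post]], List[Post]]:
    """A 'provider' here is simply a filtering policy over the same shared stream."""
    return lambda stream: [p for p in stream if policy(p)]

child_safe_feed = make_feed(lambda p: not p.flags)                        # drop anything flagged
uncensored_feed = make_feed(lambda p: True)                               # show everything
no_disinfo_feed = make_feed(lambda p: "unverified-claim" not in p.flags)  # drop flagged claims only

stream = [
    Post("alice", "Lovely weather today"),
    Post("bob", "A shocking claim with no source", {"unverified-claim"}),
]
print([p.author for p in child_safe_feed(stream)])  # ['alice']
print([p.author for p in uncensored_feed(stream)])  # ['alice', 'bob']
```

The point of the sketch is that the underlying stream is identical in every case; the censorship decision moves from the monopoly platform to the individual user’s choice of provider.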


So why isn't the market providing this now, and how could we make this possible?

First, we need to recognise that monopolies are part of the problem, and that the issues could be reduced by a plurality of competing organisations. Regulators arguably failed to prevent the dominance of Facebook and Twitter not only because they missed the significance of social media as an emerging sector, but also because they failed to appreciate, until the incumbents were well established, the importance of network externalities in enabling (or blocking) new entrants.

Second, this plurality of organisations should be interoperable. This means we should promote greater adoption of open, interoperable standards, like OStatus or the newer ActivityPub standard. Mastodon, for example, already adopts these.
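For a flavour of what such a standard looks like in practice, below is a minimal sketch of the kind of JSON ‘activity’ that ActivityPub-compatible servers such as Mastodon exchange. The user and server URLs are invented placeholders, and real federation also involves HTTP signatures and delivery between servers’ inboxes, which this omits.

```python
import json

# A minimal ActivityPub-style "Create" activity wrapping a "Note":
# roughly the unit that federated servers exchange with one another.
# The actor and addressing URLs below are invented placeholders.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://provider-a.example/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "attributedTo": "https://provider-a.example/users/alice",
        "content": "Hello from an interoperable network!",
    },
}

# Any compliant server can parse and render this, regardless of which
# provider 'alice' happens to use to publish it.
print(json.dumps(activity, indent=2))
```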

Third, we should take seriously the idea of ‘social graph portability’. That is, ownership not just of the content that we write and post, but also of the map of unique digital connections that we create. Owning this data – which could be mandated by a ‘Social Graph Portability Act’ – would mean that our social networks could more easily be recreated on other platforms, and that we could potentially use it to ‘reroute’ messages from other platforms towards our preferred choice. Whilst not trivial to implement, this idea has been gaining support over the past few years, and it seems a missed opportunity for government not to have looked at this option.
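As a purely hypothetical sketch of what such portability might involve, the snippet below exports a user’s follow list as globally-addressable URLs, so that a competing provider could reconstruct the same network. The export format and function names are inventions for illustration, not a proposal for an actual standard.

```python
import json
from typing import List

def export_social_graph(owner: str, following: List[str]) -> str:
    """Serialise a user's follow list as portable, globally-addressable URLs."""
    return json.dumps(
        {"owner": owner, "type": "Following", "items": sorted(following)},
        indent=2,
    )

def import_social_graph(raw: str) -> List[str]:
    """A competing provider reconstructs the user's network from the export."""
    return json.loads(raw)["items"]

export = export_social_graph(
    "https://provider-a.example/users/alice",
    ["https://provider-b.example/users/bob",
     "https://provider-c.example/users/carol"],
)
assert import_social_graph(export) == [
    "https://provider-b.example/users/bob",
    "https://provider-c.example/users/carol",
]
```

Because each connection is identified by a URL rather than a platform-internal ID, the graph is meaningful outside the platform that created it; that is the property a portability mandate would need to guarantee.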

Because so much of our everyday life depends on the internet, it is right that illegal harms can be addressed online in the same way as they would be offline. However, precisely because so much of our interaction now takes place online, attempts to regulate legal interaction should be approached cautiously and more imaginatively than is currently the case. Doing so could actually be a welcome spur for innovation, rather than a dampener.

The consultation on the white paper closes at 23:59, 1 July 2019.

Author

Christopher Haley

Head of New Technology & Startup Research

Chris led Nesta's research into how startups and new technologies can drive economic growth, and what this means for businesses, intermediaries and for government.
