Epileptics being maliciously exposed to flashing images. Children being bullied via social media. Adults being misled by material about medical interventions. Since it was conceived by academics in 2018, the Online Safety Bill — which returns to Parliament this week — has expanded in scope to cover very disparate potential harms. It has not, however, resolved the fundamental problems with its approach to the online world.

Since we now do so much online, governments obviously feel that the law should apply to the online world as it does to the offline world. In general, it already does. The Malicious Communications Act of 1988, for example, covers both online and offline communications. Existing laws can also be used to hold institutions and companies to account for wrongdoing in the virtual realm — including fraud, libel, or failure to uphold their own professional standards.

But the translation between the actual and the virtual isn’t seamless. Technology means we do things differently. Instead of a pub conversation which is heard in context, in person, and only by those present, social media conversations can be spread to an audience of millions and survive for decades. School bullies can message their victims day and night. News and opinions can be found with the twitch of a thumb, with no editorial board responsible for tone, taste or checking of sources.

So the Online Safety Bill promises to establish “a new regulatory regime to address illegal and harmful content online”. It would do this by imposing legal duties on providers of specific internet services: search engines, platforms for pornographic content, and “user-to-user services”.

Video Sharing Platforms (VSPs) established in the UK are already subject to Ofcom regulation, which requires them to protect the public from illegal content, and under-18s from “videos and adverts containing material that might impair their physical, mental or moral development”. This means TikTok, Vimeo and OnlyFans are all covered. The new legislation would expand both the type of user-to-user platform, and the scope of content, that might be deemed harmful. It especially targets the kind of user-generated content that has not already fallen under the regulatory net. However, rather than trying to regulate explicitly what is said, shown and shared online, the Government plans to delegate that task to the platform providers, overseen by Ofcom through agreed codes of conduct.

From the start, the language of the Bill has been that of harm, safety and danger. Service providers must carry out risk assessments, and then have a “duty of care” to take reasonable steps against these risks. This approach was first proposed by Professor Lorna Woods and William Perrin in 2018, drawing on health and safety regulation and the “precautionary principle”, which emerged from German environmentalism in the Eighties and essentially preaches: ask first about the risks of harm, not the unforeseen opportunities. Woods and Perrin laid the blame for the internet’s worst features at the door of companies that designed systems to engage ever-larger audiences and make ever-larger profits. They regarded regulation as an opportunity to force those companies to pursue a “harm reduction cycle” — and a more ethical approach to the wider impacts of their products and services.

Woods and Perrin’s proposals, on which the Government’s White Paper drew heavily, discussed a wide range of harms, from physical abuse of children to damage to democracy itself. The problem is that, as the lawyer Graham Smith has put it, “speech is not a tripping hazard”. The effects of online content are subjective and social, stemming from an interaction between the post, the person who reads it, and what that person goes on to say or do — to the extent that the law has to take into account the intention of the speaker, writer or poster. The duty of care is therefore a completely inappropriate model for improving the impact of technology on our social interactions: somebody responsible for a building has a duty of care to maintain it, to avoid causing injury to visitors, but not to control what those visitors say to one another while in the building. “Duties of care owed by physical occupiers relate to what is done, not said, on their premises,” as Smith puts it.

The kinds of harm described in the draft legislation include some that are nebulous and subjective: psychological distress, or offence caused to a group that is (perceived to be) the target of a joke, for example. This means that putting the onus on technology companies to predict the risk of harm, especially from user-generated material that may not be illegal, is bound to have negative effects. Context, nuance and irony are stripped away on social media. Under the threat of large fines, companies will be incentivised to adopt the precautionary principle and get rid of anything potentially troublesome.

Many social media platforms already remove, hide or downplay content that contravenes their own rules or tastes, shaping the kinds of conversations that are possible in their online “public square”. The Government’s proposed codes of conduct now demand that these platforms reduce the risk of poorly-defined harms to unspecified people. The “duty to have particular regard to the importance of protecting users’ right to freedom of expression within the law” will be far too vague to enforce.

Some of the problems the Bill has faced are very practical ones. It is much easier to say that children should be protected from seeing inappropriate material online than it is to design and implement a reliable system to keep them from it. Bullying takes many forms that can be hard for other people to recognise, let alone automated systems. But the Bill’s fundamental flaw is the philosophical approach that sees online human interactions primarily in terms of danger and harm: something to be solved through risk assessments and the precautionary principle.

Unfortunately, though this Bill is specifically about the digital world, it reflects some wider trends. We’re too inclined these days to see human interaction as inherently risky — something that needs to be regulated, lest it cause harm. Asking service providers to supervise human exchanges on their platforms, as if they’re dinner ladies in a school playground, is simply a formalisation of our general unease with the unforeseeable nature of human communication.

One of the reasons so many interpersonal interactions moved online, even before the pandemic, was that virtual communication is seen as less risky. Dating via an app is more controllable than walking up to strangers in bars, trying to think of a good opening line despite the possibility of face-to-face rejection (or getting into a conversation you’re not enjoying, and from which you can’t politely extricate yourself).

Our idea of harm has inflated — a fact reflected by the growing tendency to regard unwelcome disagreement as intrinsically damaging. And if disagreement is seen (and felt) as an attack on identity, those who feel attacked will call for the removal of content they don’t like. Enforcing an online regime driven by the precaution against the risk of harm is therefore either impossible — or the path to destroying any pluralist public sphere.

The idea that the internet can be re-engineered around a driving principle of reducing risk is, in my view, completely unrealistic. But it appeals to policymakers and others who view the offline world as beyond control. Even before the pandemic, the precautionary principle was taking over our politics.

Online life is already less risky than offline. The real world has no off button. Face-to-face conversations can’t be abandoned without notice or with a BRB emoji. Physical harm, accidental or deliberate, is confined to the offline world, though the online world can facilitate it.

But, like the real world, the virtual one cannot be made completely safe. A risk-free, supervised, inoffensive online world is a fantasy. This confused Bill is a distraction that should be ditched, in favour of law-making that focuses on specific activities — ones that can be defined, detected, and prevented or punished. There is plenty wrong with the form that online society currently takes, but most of it is the kind of social problem that Parliament cannot fix.

Source: UnHerd. Read the original article here: https://unherd.com/