In virtue of my heretically archaic views about biology and the importance of women’s rights, I’m the target of quite a lot of rude online behaviour. The other day, for instance, I learnt I was lucky I hadn’t been hanged yet. So you might expect me to be emphatically in favour of attempts to remove what the Online Safety Bill calls “harmful communications” from the internet.
This isn’t the case. In fact, I think the Bill’s proposals on this kind of internet content are a dog’s dinner. If implemented, they will undoubtedly suppress desirable levels of freedom of expression on the internet, and cause more problems than they resolve.
Following scrutiny from the Joint Committee, the Bill — which received its second reading in the Commons yesterday — takes recent Law Commission proposals to introduce a “harm-based” communications offence, and places a duty of care on internet providers and websites to restrict content which meets the definition of this proposed offence, give or take a few tweaks. Specifically, they will be required to restrict any content where there’s a “real and substantial risk that it would cause harm to a likely audience”, the sender “intended to cause harm to a likely audience”, and the sender has “no reasonable excuse for sending the message”. Harm is defined as “psychological harm amounting to at least serious distress”. A “likely audience” comprises any individual who could reasonably be foreseen to encounter the content.
The flaws here were also present within the Law Commission proposals that inspired the Bill. Take the criterion of “psychological harm amounting to at least serious distress”. As many have noted — though apparently not in Westminster — concepts such as “psychological harm” and “distress” are moving targets, semantically speaking, in the sense that the sort of thing they refer to changes over time. For instance, in a society whose primary concern is with the alleviation of negative experience, concepts associated with negative experiences tend to expand their semantic range and become increasingly diluted. So, over time, the category of “abuse” has moved beyond physical events to include emotional ones as well; and the category of “trauma” has extended from atypically catastrophic life events to relatively common happenings like childbirth and bereavement.
At first glance, “experts discover new form of trauma!” looks reassuringly scientific, a bit like “experts discover new kind of dinosaur!”. But whereas the existence of a dinosaur is completely independent of the activities of the experts who discover it, this is not the case with trauma. The more generous experts are willing to be in their definitions, the more people will then count as traumatised; the more people who count as traumatised, the more people will define themselves in terms of membership of that group, and so become more able to exert political pressure on others — including upon the experts themselves, of course — to recognise the precise nuances of their suffering.
This is one of what philosopher Ian Hacking has called the “feedback loops” within psychological classification. Of course, this trajectory towards dilution is not inevitable, and partly depends on wider political sensibilities within a given society. In a culture which prioritises personal resilience towards negative experiences rather than their automatic accommodation, the sphere of traumatic events might ultimately contract rather than expand. But we don’t live in that kind of place.
As with trauma, so with psychological distress. Things that were felt as minor ripples centuries ago, if at all — misgendering, for instance, or cultural appropriation, or the mere mention of a slur in quotation marks, or even just free speech arguments themselves — are now crushing blows for many. This isn’t to deny that strongly unpleasant feelings are generated, even as the category of distressing events and experiences expands. Feelings have a habit of rushing in to fill whatever gap has been culturally opened for them.
The point for present purposes is that the number of forms of communication that meet the Bill’s criterion of risking psychological distress is getting bigger all the time. And as the list of things to be distressed about grows by the year, the range of socially permissible speech contracts. One author cited in the Joint Committee report inadvertently underlined this point, insisting that the Bill should “ensure that a broad range of forms of online abuse are acknowledged (e.g. including, but not limited to, ableism, ageism, racism, sexism, misogyny, xenophobia, Islamophobia, homophobia, and transphobia)”. Such a sentence would have been incomprehensible to readers even 20 years ago.
And it’s actually worse than this: for the type of content to be removed, according to the Bill, is not that which actually and reasonably causes harm in the form of serious psychological distress, but that which others anticipate may do so. This point needs to be considered in tandem with the apparent assumption of legislators that harmful internet content is most likely to be aimed at people in virtue of their race, sex, gender identity, religious affiliation, and so on — that is, in terms of membership of protected groups such as those listed in the Equality Act.
The problem here is that the methods likely to be used by internet providers to gauge what would distress a particular protected group are bound to be clumsy. When trying to work out how some group feels, many of us project onto them the feelings we think they should have, or the feelings we think we would have, rather than the feelings they actually do have. And when it comes to trying to reconstruct the feelings of members of minority groups of which we personally are not members, it seems that many of us err on the side of moral caution and go dramatically over the top on their behalf, treating any anticipated slight as a deeply felt wound without checking that we are right. Things won’t necessarily be improved in this regard by internet providers consulting campaigning groups and charities who claim to speak for a given protected group — as the Bill indicates in at least one place that they should — because many of these have a huge financial interest in giving as dramatic an account of a protected group’s feelings as possible.
Perhaps the Bill’s authors believe this all can be easily overcome by the insistence that the sort of content to be removed is only that which is intended to cause psychological distress to a likely audience. This is a false dawn, however. Gauging the intention behind a given statement — what the author meant by it, and what effect they meant to produce by it — requires a relatively deep understanding of linguistic context. But this is the internet we are talking about, where often you only know an author’s name and sometimes you don’t even know that. Character limits truncate explanation, edit buttons are often in short supply, and disembodied statements float around the internet. The Bill even tells us that “an offence may be committed by a person who forwards another person’s direct message or shares another person’s post”, but this ignores multiple ways in which the intent of the original author and the intent of the person forwarding or retweeting it might diverge.
Here, too, given the current cultural climate, providers are likely to err on the side of caution and ban or restrict rather than leave alone. And here, too, there’s likely to be expansion not contraction in the frequency with which a statement is judged as possibly indicative of the intention to distress. For just as the concept of distress is subject to feedback loops which shift it unstably towards semantic dilution, so too judgements about the presence of an intention to distress are a moving target, depending as they do on prior judgements about what other people conceivably might find distressing. Fifty years ago, it would have been pretty much inconceivable that saying “men can’t be women” could be distressing to anyone, and so equally inconceivable that someone could seriously intend to distress another person by saying this. These days, you can get banned from Twitter because of it.
If their recent behaviour proves anything, it’s that these online giants don’t need another reason to crack down on content they think will distress certain groups of people. After all, they are already doing this, albeit in a way which reflects the priorities of Silicon Valley technocrats: putting up pictures of penises is fine; saying women don’t have penises isn’t. The result is an impoverishment of public conversation, a sense of burning injustice on the part of many good-faith conversationalists, and huge and unnecessary toxicity between identity groups. Delivering people from psychological distress is the business of therapists or priests, not lawmakers.
Source: UnHerd. Read the original article here: https://unherd.com