When the Comanche chief Tosahwi surrendered to Philip Sheridan in 1869, he described himself as “a good Indian”. “The only good Indian,” the General is said to have replied, “is a dead Indian”. This genocidal epigram has been adapted for use as a provocative assertion of unbending tribal enmity ever since.
The day after Captain Tom died last February, Joseph Kelly, a 36-year-old Glaswegian Celtic fan, tweeted the epigram with “Brit soldier” substituted for “Indian”, in the manner of the IRA. He then clarified his position by adding: “burn auld fella, buuuuurn.” Last week, almost a year on, he was convicted by Sheriff Adrian Cottam of sending a “grossly offensive” message contrary to section 127(1)(a) of the Communications Act 2003. He awaits sentence, and perhaps an appeal.
On the rare occasions when discussion of free speech trials avoids the buffers of perceived partisanship — “You were a Thursday night pot-banger so of course you want to throw the book at him” — it is usually derailed by flitting between two separate questions: “Was the law correctly applied by the court?” and “Should the law be different?” Such flitting can of course be deliberate as well as careless.
Our answer to the first question almost always depends on the meaning of “grossly offensive”. In 2006 the Judicial Committee of the House of Lords held:
“There can be no yardstick of gross offensiveness otherwise than by the application of reasonably enlightened, but not perfectionist, contemporary standards to the particular message sent in its particular context. The test is whether a message is couched in terms liable to cause gross offence to those to whom it relates.”
As to the mental element of the offence: “the defendant must intend his words to be grossly offensive to those to whom they relate, or be aware that they may be taken to be so”. And in 2017 the High Court seems to have clarified that to mean: “taken to be so by a reasonable member of the public.”
So, if a message is liable to cause gross offence to a person to whom it relates, and the defendant is aware that a reasonable member of the public might take it to be liable to do so, that is enough for a guilty verdict.
Now, one could argue that there is no prospect of someone taking offence at a tweet posted after they’ve died — the charge, after all, specified “offensive remarks about Captain Sir Tom”, rather than about British soldiers in general. But the appeal courts would likely find a way to close that apparent loophole — perhaps by establishing a very broad interpretation of “to whom it relates”.
Was Section 127 applied correctly then? Probably, yes. My own reaction to the tweet was, I confess, low-key: somewhere between a frown and an eye roll. But it was clearly liable to cause gross offence to Captain Tom’s family, and the defendant would surely have been aware that a reasonable member of the public might see it that way.
But that doesn’t mean this law is any good. And that’s because it does not rest upon a shared cultural respect for a set of extreme taboos. All manner of commonplace statements are now liable to cause gross offence to those to whom they relate — yet be completely unobjectionable to almost everybody else. Yes, there are interpretative protections about the importance of context and so on, but Section 127 criminalises too much, and too vaguely.
The Indian Supreme Court came to this conclusion several years ago. The relevant parts of Section 66A of their Information Technology Act 2000 were written in near-identical terms to Section 127 of our Communications Act 2003. In striking down the legislation in 2015, the Justices held that “expressions such as ‘grossly offensive’ […] are so vague that there is no manageable standard by which a person can be said to have committed an offence or not”.
Fortunately, we might be about to catch up with the Developing World. In July last year the Law Commission recommended we get rid of Section 127. Its report noted that we lack universally accepted definitions of concepts such as “grossly offensive”; the “spectre of universal standards” should be “rejected”. Arguably, a lack of universal standards does not augur well for society — but we mustn’t deny that this is the state we find ourselves in.
In place of Section 127, the report recommended a new offence: posting, without reasonable excuse, a communication “likely to cause harm to a likely audience”, with the Crown required to prove that the defendant intended harm. Harm is defined as, at a minimum, serious psychological distress. This is, on the face of it, a great improvement. And last Friday, the Government published an interim response expressing its intention to add this offence to the Online Safety Bill.
So unlike Section 127, the new offence would require proof of an intention to cause serious psychological distress. And even when a defendant did hold that intention, he or she could argue that the message wasn’t actually likely to be seen by anyone likely to be seriously distressed by it. This could place heavier responsibility on the bigger Twitter accounts, which is as it should be. And “It was supposed to be a joke”, “I didn’t mean any harm by it”, “I didn’t expect it to be retweeted so much”, and even “in the circumstances it was fair enough” (reasonable excuse) could all, if this law is enacted, become relevant parts of a defence.
But there is a twist. The Law Commission’s original plan, in the consultation paper, was that the Crown need only prove that the defendant was aware of the risk that harm might be caused. This is a much lower threshold than intention. The reason for their change of heart, they repeatedly emphasised, is that the Online Safety Bill imposes a legal duty on social media companies to remove harmful content — including content that is not illegal.
So a complaint to Twitter about the removal of a perfectly lawful tweet could be met with the response, “Yes, but the only reason tweets like this are not illegal is because we have to delete them”. Of course, the State delegating its natural responsibilities to private companies is nothing new. But to do so to this extent — narrowing the ambit of an important piece of criminal legislation at the heart of our right to Free Speech — is a worrying development, and does not sit well with the Law Commission’s stated aim of “technological neutrality”, or their intention that new offences be “future-proofed”.
If a behaviour is so wrong that it should be criminal, it ought not to slip through the net simply because global tech companies will be told to prevent it. The Law Commission rightly expressed concern about the “inconsistent application” of the Section 127 offence. But if there is one thing all sides can agree on, it is that Twitter’s moderation policies are known for little else.
The high bar of intention to cause harm is a welcome development, and a measure of offensive dross in the digital public square is a price worth paying: for freedom, yes, but also for the sense of freedom, particularly among people at the less free end of the socio-economic spectrum. But instead of requiring social media companies to remove non-criminal material, it might be better — safer, even, in the long run — if they were prevented from doing so. After all, responsibility for deciding what can and cannot be said should rest, perhaps, with any reasonable member of the public. And most of them are not on Twitter.
Source: UnHerd. Read the original article here: https://unherd.com