‘It is hard to overstate just how sinister the Online Safety Bill is. The gravest threat to freedom of speech since section 5 of the Public Order Act 1986, which criminalised “insulting words and behaviour”? That scarcely does it justice. Let’s settle on the most serious threat since the proposal to force state regulation on the press in the aftermath of Leveson.
The Online Safety Bill, which has already had its second reading in the House of Commons, is intended to make the UK the safest place in the world to go online. If you think “safest” is code for “most heavily regulated”, you’re not far wrong.
The Bill will empower Ofcom, the broadcast regulator, to fine social media companies up to 10 per cent of their global turnover if they fail to remove harmful content — and not just harmful to children, which is hard to argue with, but to adults as well.
What does the Government mean by “harmful”? The only definition the Bill offers is in clause 150, where it sets out the details of a new Harmful Communications Offence, punishable by up to two years in jail: “‘harm’ means psychological harm amounting to at least serious distress.”
But, confusingly, it won’t just be harmful content meeting this definition that the Bill will force social media companies to remove. After all, this relates to a new criminal offence — and content that meets the threshold for prosecution under this new law will, by definition, be illegal. Notoriously, the Bill will also force social media companies to remove “legal but harmful” content — and exactly what that is, is anyone’s guess. I’m sure political activists and lobby groups claiming to speak on behalf of various victim groups will have a lot to say about it.
The bottom line is that stuff it is perfectly legal to say and write offline will be prohibited online. And not just mildly prohibited — YouTube or Twitter or Facebook could be fined up to 10 per cent of their annual global turnover for a transgression — so in Facebook’s case $11.7 billion, based on its 2021 revenue.
That’s a powerful incentive for social media companies to remove anything remotely contentious — and they hardly need much encouragement. Facebook deleted 26.9 million pieces of content for violating its Community Standards on “hate speech” in the first quarter of 2020, 17 times as many as the 1.6 million instances of deleted “hate speech” in the last quarter of 2017.
More than 97 per cent of Facebook’s purged “hate speech” in the last three months of 2020 was identified by an algorithm and removed automatically. It’s a safe bet that the sensitivity dials on the algorithms social media companies use to censor questionable content will be turned up to 11 if this Bill ever becomes law.’ More of this article at https://thecritic.co.uk/issues/june-2022/why-i-fear-this-censors-charter/