Monday, August 22, 2022

The government's "Online Safety Bill": form of censorship?

The very words "Online Safety Bill" should be a red flag. 

Ever-greater state power is draped in the language of protection and safety. The logic goes: in order to be protected by the state, we must surrender some autonomy; and that way, everybody can be more effectively controlled and thus made "safe". It is a cliché by now, because the other side of the coin of 'protection' is always 'control'.

With this bill, that traditional state power is being 'delegated' or 'devolved' to tech companies. Although separate from the state, they would effectively give effect to state power. It is an interesting change in the political landscape, and it allows for a level of censorship that a government could not ordinarily bring into effect without express Parliamentary authorisation. That makes it a little bit different. It follows repeated calls for someone to "do something" about misinformation, online trolling and abuse, and child safety, calls which are very popular with the electorate.

The problem is that there is no way of truly controlling interaction between people over the internet, with a view to eliminating ostensible harms, without diminishing the interaction itself. And, like other forms of prohibition, alternatives will emerge to provide the same original service – e.g. VPNs. I have no idea how effective age verification checks on websites would be, but I imagine, as prohibitions have usually shown, they would incentivise more elaborate means of evading the 'checks' on inappropriate websites. It seems to me that the more one seeks to control these things, the more likely it is that a different end will be accomplished.

✲✲✲

The heart of this bill, as it concerns users, is to put the onus on tech companies to "protect" them from "harmful content" as well as illegal content. But how can anyone protect us from "harmful content"? What exactly is considered 'harmful'? Can entire subjects be framed as "harmful" on account of their controversy or inconvenience? Companies would be placed in the invidious position of picking sides in a controversy (or even an argument) and deciding which people are 'correct' or 'fit' to engage in it. Enormous AI systems would be needed, which would be ill-suited to recognising subtleties and shades of meaning, and so blanket rules would be introduced by the tech companies to 'protect' us. And as recent artificial intelligence has shown, such systems are only ever as good as their design and architecture, and carry the inherent biases of their developers (see: New York Times, Who Is Making Sure the A.I. Machines Aren't Racist?).

The new so-called "duty" creates an enormous range of obligations which are unworkable for any business other than the tech giants. Since the duty entails enormous penalties, tech companies would be strongly incentivised to minimise litigation and fees, and would lean on the 'better safe than sorry' approach with a heavy-handed clampdown. As Matthew Lesh has written, it will involve pre-scanning users' messages before they are uploaded, followed by a determination of what the company believes might be illegal. Further:

What is amazing is the sheer audacity and scale involved. The burden on companies must be incredible. The proposed increase to OFCOM's remit, for the purpose of regulating websites, must also be hugely costly and onerous.

✲✲✲

Lord Sumption's first-class criticism of the Online Safety Bill is also well worth reading in full: The hidden harms in the Online Safety Bill.
