Curbing the dangers of the online world
Content that glamorises suicide promoted to children, algorithms that suggest terrorist material, clickbait that exacerbates division: the problems attributed to Big Tech companies are easily understood. What is less simple is knowing how to lessen these societal harms without trampling over free speech. On both sides of the Atlantic, lawmakers are grappling with the best ways of making Big Tech more accountable. How they go about it will inevitably be imperfect. It is important that they try nonetheless.
The long-awaited online safety bill, which promises to make the UK the “safest place in the world to be online”, was amended this week after rebel Conservative MPs pushed for the top brass of Big Tech firms to face jail time if they fail to protect under-18s from harmful content. The rebels point to similar liability for bosses under health and safety legislation, and for those in finance: if senior managers in sectors that can wreak wider damage face criminal sanctions, they argue, then so too should those at the top of Big Tech. But the aim is less to create a level playing field than to improve behaviour.
Almost certainly, and correctly, any prosecution would face significant hurdles even to begin, let alone to convince a jury: it would come only after escalating warnings had gone unheeded. But the mere threat ought to concentrate minds. Such is the experience in banking: a criminal offence of recklessly mismanaging a bank that fails was introduced in 2016, alongside regulatory measures to improve accountability. The criminal offence is yet to be triggered (no bank has failed since) and there has been scant enforcement of the regulatory penalties. Yet, according to watchdogs, the fear of losing one’s livelihood, or liberty, is enough to drive up standards.
The bill will alter as it passes through the House of Lords. That is no bad thing: the bill is sweeping, flawed and has already changed substantially since 2019. The current iteration has sensibly dropped the requirement for companies to take down content that is legal but harmful (a far thornier determination than illegality), except where children are concerned: legal but harmful content aimed at them must still be curbed. Even when it is eventually passed, politicians ought to revisit the legislation frequently, particularly the powers granted to the regulator Ofcom to decide where social media companies are falling foul of the rules.
But the claim that the UK will be an outlier is wrong. The criminal amendment pushed by the rebels is modelled on existing Irish legislation, and the EU’s Digital Services Act places similar requirements on Big Tech to remove illegal content.
In the US, lawmakers have been slower to grasp the nettle, even if politicians on both sides of the aisle want to curb Big Tech (albeit for different reasons). President Joe Biden has called for a bipartisan effort to better protect privacy and to diminish the “liability shield” of section 230 of the 1996 Communications Decency Act, which gave tech platforms immunity for content others post on their sites.
The courts are likely to get there first: the Supreme Court will hear two cases that turn on section 230, including claims that Google’s YouTube violated anti-terrorism laws when its algorithms recommended Isis videos. The court’s full docket, along with Elon Musk’s efforts at Twitter to roll back content moderation in the name of free speech, means that lawmakers, judges and regulators on both sides of the Atlantic will be busy determining just where the line between free speech and harmful content lies. Their determination may be imperfect. It will probably need to be revisited. But it is overdue.