Disinformation dangers lurk in the EU’s media freedom act

The writer is a practitioner fellow at Stanford University’s Digital Civil Society Lab and a former senior content moderator at Twitter

Twitter has recently come under fire for changes to its verification system that attached the labels “state-affiliated media” and “government-funded media” to accounts managed by the US public service broadcasters PBS and NPR and by the UK’s BBC. After an uproar over press freedom, the social media company was forced to change the label to “publicly funded media”.

This furore not only drew ridicule (one former BBC editorial director commented that it looked as if “the work-experience guy” had been left doing the labelling); it also exposed the pitfalls of amateur content-moderation policy. This system of self-regulation with no accountability or transparency has taken the US down a dangerous path. By contrast, Europe’s regulatory efforts, set out in the Digital Services Act, look promising. But the DSA’s attempt to address systemic risks such as disinformation may be undermined before it has a chance to work: amendments proposed in the EU’s draft European Media Freedom Act would grant a blanket exemption from content-moderation obligations to any organisation categorised as “media”.

I know the problems this could cause, because I used to work inside Twitter’s trust and safety department, writing and enforcing moderation policies. During my time there, from 2019 to 2021, I had a unique vantage point on momentous world events. I saw first hand the devastating impact of social media in stoking the violent attacks on the US Capitol and I later gave evidence as a whistleblower to the US Congress.

The danger is that Article 17 of the EMFA could create a potentially limitless category of bad actors who simply declare themselves “media” entities. It would be a cheap and easy way for disinformation campaigns to legitimise themselves. At a time when deepfake news outlets are being established, generative artificial intelligence is running rampant in news, and hostile states are actively exploiting social media features such as Twitter’s recent verification changes, we should not be creating new weapons for information warfare.

If the media exemption in the EMFA passes, content moderators like me will also have our hands tied by law. For example, when Vladimir Putin invaded Ukraine, the Russian state broadcaster RT opened a Twitch account. Inside Twitch, where I worked at the time, conversations immediately began about how to implement the company’s new policy on harmful misinformation. RT’s account was quickly banned.

Under the EMFA, companies would no longer have this freedom. Disinformation outlets claiming to be legitimate “media” would be exempt from swift action, and their propaganda would be amplified.

Instead, regulators need to work with the moderators who have spent years wrangling with how to verify news outlets and define concepts such as newsworthiness. They have developed metrics and made mistakes. Their knowledge and experience are the key to getting this right. Through the DSA’s transparency requirements, we can start to fix platforms, make them accountable and ensure that redress mechanisms are effective.

If there is one thing I learnt in my career, it’s that you can write down any words you want and call them “policy”. But if you cannot evenly enforce those policies in a principled manner, then they are meaningless.

The whole world is watching to see how the DSA works — the EU’s institutions must focus on ensuring this landmark regulation is a success. Allowing the EMFA to create a dangerous and unworkable loophole, on the other hand, is a sure-fire route to failure.
