Brussels asks Big Tech to counter threats to integrity of European elections

The European Commission has asked X, TikTok, Facebook and other online platforms to mitigate risks to elections and clamp down on voter disinformation, as part of new guidelines adopted on Tuesday.


The guidelines – targeted at online platforms with more than 45 million monthly active users in the EU, which are therefore designated ‘Very Large Online Platforms’ or ‘Very Large Online Search Engines’ under the bloc’s pioneering Digital Services Act (DSA) – set out potential measures to tackle election-related risks, harmful AI-generated content, and misleading political advertising.

They also include specific guidelines for June’s crunch pan-EU election, amid fears of increased malign interference and a deluge of misinformation online.

Although the guidelines are not legally binding, the Commission could launch formal proceedings against any platform that deviates from the DSA provisions on elections and democratic processes. This could see the executive slap fines of as much as 6% of global turnover on non-compliant platforms and search engines.

The move is part of a coordinated effort by Brussels to clamp down on the industry’s penchant for self-regulation, which has often been decried as complacent and insufficient, and force Big Tech to do more to uphold democratic values.

A senior EU official said the guidelines were a response to the “threat” to the integrity of elections in the bloc, particularly due to the rapid deployment of generative AI and the spread of misleading deepfakes which “sow divisions” in European societies.

Last October, a deepfake recording of a candidate in the Slovak elections claiming he had rigged the vote went viral, in a clear threat to the integrity of the democratic process.

Under the new framework, platforms will be required to urgently flag such high-risk situations under a new incident response mechanism, and cooperate with European and national authorities, independent experts and civil society organisations to tackle emerging threats.

Another concern for the Commission is so-called recommender systems: the machine-learning algorithms that can prioritise divisive, harmful or misleading content with viral potential. The guidelines require platforms to design such systems in a way that gives users “meaningful choices and controls over their feeds.”

The executive is also on the alert for the spread of incorrect election information through AI-powered chatbots, which can invent false answers in a phenomenon known as “hallucination,” the official said.

In 2023, a study by the non-profit groups AI Forensics and AlgorithmWatch found that Microsoft’s Bing Chat – since rebranded as Microsoft Copilot – answered a third of election-related questions incorrectly. The errors included wrong information about election dates and candidates, as well as fabricated controversies concerning candidates.

Safeguards ahead of June’s elections

The adoption of the guidelines is tactically timed ahead of the elections to the European Parliament and follows a period of consultation with platforms, which were invited to provide feedback on the draft.

Several companies say they have already risen to the challenge of introducing election safeguards ahead of June’s ballot. Google, Meta and TikTok have set up so-called election centres to combat misinformation surrounding the vote.

From next month, TikTok is set to send push notifications to its millions of European users directing them to an in-app election centre, where the platform says they can find “trusted and authoritative information” about the vote, as well as “media literacy tips.”

The Commission says it will stress-test the rules with “relevant platforms” at the end of April, but could not confirm which platforms may be required to undertake the tests.

With 370 million eligible voters heading to the polls in 27 member states in June, Brussels fears platforms’ resources will be stretched thin by the need for content moderators fluent in the bloc’s 24 official languages.

For instance, X’s latest transparency reports show it employs just one content moderator each for Bulgarian, Croatian, Dutch, Portuguese, Latvian and Polish within its global team of 2,294 people. The company has no human content moderators at all for 17 of the EU’s official languages, including Greek, Hungarian, Romanian and Swedish.

This linguistic complexity means that the European elections are “particularly vulnerable,” the senior official said.

The move also comes during the biggest election year in world history, with more than 2 billion voters set to head to the polls.


The official acknowledged that while DSA compliance is “costly,” the additional cost of applying similar rules outside the EU would be “marginal,” and that platforms could therefore consider rolling out comparable safeguards worldwide.



