Meta faces accusations over content moderation in Israel-Hamas conflict
Meta has been accused of being too heavy-handed in its moderation of content related to the Israel-Hamas conflict and was urged to offer greater protection to freedom of expression while fulfilling its human rights obligations.
The US tech giant’s independent oversight board, in a report published on Tuesday, found that Meta had overcorrected when it lowered its threshold for automated tools to remove potentially rule-breaking content following the attack on Israel by Hamas on October 7.
“While reducing the risk of harmful content, it also increased the likelihood of mistakenly removing valuable, non-violating content from its platforms,” said the board, citing posts that “inform the world about human suffering on both sides of the conflict”.
As of December 11, Meta had not restored these thresholds — which dictate how “confident” its automated moderation systems have to be in order to take down content — to the higher levels seen before the October attack, said the board, a “Supreme Court”-style body set up by Facebook’s parent.
The report comes as Meta and its social media rivals TikTok, X and Google’s YouTube have faced a deluge of content related to the conflict, including graphic imagery of the war as well as misinformation.
The platforms have also been accused of bias. TikTok, for example, has faced allegations from some US lawmakers that it amplifies pro-Palestinian voices and fails to address antisemitic content.
Dozens of human and civil rights organisations have claimed that Meta, on the other hand, has suppressed Palestinian voices and allowed anti-Palestinian content to spread on its platforms.
In its report, the board urged Meta to “respond more quickly to changing circumstances on the ground, which affect the balance between the values of voice and safety”.
The board, which comprises journalists, academics and human rights activists, also raised concerns that Meta was removing content that might contain evidence of human rights violations, calling for it to preserve such content.
Meta’s oversight board typically has up to 90 days to respond to appeals, but it carried out an expedited 12-day review of the two cases.
In one, Meta had taken down a video posted to Instagram showing the aftermath of a strike on or near al-Shifa hospital in Gaza during Israel’s offensive, including Palestinian children who had been injured or killed.
Meta’s systems had removed the post for breaching its policies on violent and graphic imagery. However, the board ruled that the post had “exceptionally high public interest value” and that, on freedom of expression grounds, it should have been allowed to remain, albeit with a warning label.
A separate case centred on a video posted to Facebook of an Israeli woman begging her kidnappers not to kill her as she was taken hostage on October 7. Meta initially took down the video “due to concerns about the dignity of the hostages as well as the use of such videos to celebrate or promote Hamas’ actions”.
The board ruled that the post should have remained on the platform and criticised Meta for later allowing the hostage video to be shared only by certain VIP users, saying this compounded existing concerns about “unequal treatment of users”.
The board, which started assessing cases in 2021, issues independent judgments on high-profile moderation cases as well as recommendations on certain policies.
Meta said it welcomed the board’s decision, noting that it had already reinstated the content, so no further action would be taken.
The company said that “both expression and safety are important to us and the people who use our service. The board overturned Meta’s original decision to take this content down but approved of the subsequent decision to restore the content with a warning screen.”