Techno Blender

X is failing to remove 98% of Israel-Gaza war hate and misinformation


X’s substandard content moderation has come into even sharper relief with the explosion of violence in Israel and Gaza over the past five weeks. In fact, a new study suggests that the platform is unable or unwilling to delete posts that defy its own community rules on hate and misinformation.

The nonprofit Center for Countering Digital Hate (CCDH) says it flagged 200 tweets about the Israel-Gaza conflict that clearly violated X’s own rules prohibiting racist slurs, dehumanization, and hateful imagery. A week after reporting the violative tweets to X’s content moderators on October 31, using the platform’s official reporting tool, the CCDH found that 196 of them remained visible online.

“Our ‘mystery shopper’ test of X’s content moderation systems—to see whether they have the capacity or will to take down 200 instances of clear, unambiguous hate speech—reveals that hate actors appear to have free rein to post viciously antisemitic and hateful rhetoric on Elon Musk’s platform,” Imran Ahmed, founder and CEO of the CCDH, said in a statement.

According to the CCDH, the violative posts came from 101 separate X accounts. Of those accounts, just one has since been suspended and two more were “locked.” Collectively, the reported posts from those accounts have garnered 24,043,693 views.

The CCDH provided an assortment of examples of the tweets it reported, most of which promote bigotry or incite violence against Muslims and Jewish people.

The CCDH also found that 43% of the 101 accounts that published the hateful tweets were “verified”—which, under Musk’s leadership, means they’ve paid for a blue check, and with it an appearance of legitimacy, authority, and trustworthiness, as well as wider distribution and visibility of their tweets. The CCDH report follows an October 20 NewsGuard analysis that found some 74% of the most viral posts on X advancing misinformation about the Israel-Hamas war were being pushed by “verified” X accounts.

Musk’s social platform and the CCDH have their own contentious history. In July, X sued the CCDH, claiming the nonprofit “improperly accessed data from the platform” in support of a claim that X’s depleted content moderation staff had opened the door to a surge in antisemitic and anti-Muslim hate.

“This is the inevitable result when you slash safety and moderation staff, put the Bat Signal up to welcome back previously banned hate actors, and offer increased visibility to anyone willing to pay $8 a month,” Ahmed said in his statement. “Musk has created a safe space for racists, and has sought to make a virtue of the impunity that leads them to attack, harass, and threaten marginalized communities.”

The report from the CCDH is just the latest to call attention to X’s content moderation issues specifically in regard to the Israel-Gaza conflict. Notably, European Commissioner Thierry Breton warned on October 10 that X had become overrun with “disinformation” and “violent and terrorist” content since Hamas’s October 7 attack on civilian communities and a music festival in southern Israel. In response, Linda Yaccarino, X’s new CEO, said the platform had removed hundreds of Hamas-linked accounts and taken down or labeled thousands of pieces of content since the unprecedented terrorist attack last month.

When Musk assumed leadership of Twitter (now X) in late 2022, he enacted numerous policy changes that amplified the spread of false, harmful, and inflammatory content. Almost immediately he fired a large proportion of the platform’s content moderation staff and reinstated numerous Twitter accounts that had been banned by previous management for spreading misinformation and/or hate. 
