Why fake news is huge on social media

The Israel-Hamas conflict has been a minefield of confusing counter-arguments and controversies—and an information environment that experts investigating mis- and disinformation say is among the worst they’ve ever experienced.

Since Hamas launched its terror attack against Israel last month, and Israel responded with a weekslong counterattack, social media has been full of comments, pictures, and video from both sides of the conflict putting forward their case. But alongside real images of the battles going on in the region, plenty of disinformation has been sown by bad actors.

“What is new this time, especially with Twitter, is the clutter of information that the platform has created, or has given a space for people to create, with the way verification is handled,” says Pooja Chaudhuri, a researcher and trainer at Bellingcat, which has been working to verify or debunk claims from both the Israeli and Palestinian sides of the conflict, from confirming that the Israel Defense Forces struck the Jabalia refugee camp in northern Gaza to debunking the idea that the IDF has blown up some of Gaza’s most sacred sites.

Bellingcat has found plenty of claims and counterclaims to investigate, but convincing people of the truth has proven more difficult than in previous situations because of the firmly entrenched views on either side, says Chaudhuri’s colleague Eliot Higgins, the site’s founder.

“People are thinking in terms of, ‘Whose side are you on?’ rather than ‘What’s real,’” Higgins says. “And if you’re saying something that doesn’t agree with my side, then it has to mean you’re on the other side. That makes it very difficult to be involved in the discourse around this stuff, because it’s so divided.”

For Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), there have only been two moments prior to this that have proved as difficult for his organization to monitor and track: One was the disinformation-fueled 2020 U.S. presidential election, and the other was the hotly contested space around the COVID-19 pandemic.

“I can’t remember a comparable time. You’ve got this completely chaotic information ecosystem,” Ahmed says, adding that in the weeks since Hamas’s October 7 terror attack, social media has become the opposite of a “useful or healthy environment to be in”—in stark contrast to what it once was: a source of reputable, timely information about global events as they happened.

The CCDH has focused its attention on X (formerly Twitter), in particular, and is currently involved in a lawsuit with the social media company, but Ahmed says the problem runs much deeper.

“It’s fundamental at this point,” he says. “It’s not a failure of any one platform or individual. It’s a failure of legislators and regulators, particularly in the United States, to get to grips with this.” (An X spokesperson has previously disputed the CCDH’s findings to Fast Company, taking issue with the organization’s research methodology. “According to what we know, the CCDH will claim that posts are not ‘actioned’ unless the accounts posting them are suspended,” the spokesperson said. “The majority of actions that X takes are on individual posts, for example by restricting the reach of a post.”)

Ahmed contends that inertia among regulators has allowed antisemitic conspiracy theories to fester online to the extent that many people now believe them. Further, he says, that inertia has prevented organizations like the CCDH from properly analyzing how such disinformation spreads on social media platforms. “As a result of the chaos created by the American legislative system, we have no transparency legislation. Doing research on these platforms right now is near impossible,” he says.

It doesn’t help that social media companies are throttling access to their application programming interfaces (APIs), through which many organizations like the CCDH conduct research. “We can’t tell if there’s more Islamophobia than antisemitism or vice versa,” he admits. “But my gut tells me this is a moment in which we are seeing a radical increase in mobilization against Jewish people.”

Right at the time when the most insight is needed into how platforms are managing the torrent of dis- and misinformation flooding their apps, there’s the least possible transparency.

The issue isn’t limited to private organizations. Governments are also struggling to get a handle on how disinformation, misinformation, hate speech, and conspiracy theories are spreading on social media. Some have reached out to the CCDH to try to get clarity.

“In the last few days and weeks, I’ve briefed governments all around the world,” says Ahmed, who declines to name them—though Fast Company understands that they may include representatives of the U.K. and the European Union. Advertisers, too, have been asking the CCDH for information about which platforms are safest to advertise on.

Deeply divided viewpoints are exacerbated not only by platforms cutting back on transparency but also by technological advances that make it easier than ever to produce convincing content that can be passed off as authentic. “The use of AI images has been used to show support,” Chaudhuri says. This isn’t necessarily a problem for trained open-source investigators like those at Bellingcat, but it is for rank-and-file users, who can be hoodwinked into believing AI-generated content is real.

And even if those AI-generated images don’t sway minds, they offer another weapon to those supporting one side or the other: a slur, akin to dismissing inconvenient factual claims as “fake news,” that can be deployed to discredit legitimate images or video of events.

“What is most interesting is anything that you don’t agree with, you can just say that it’s AI and try to discredit information that may also be genuine,” Chaudhuri says, pointing to users who claimed an image of a dead baby shared by Israel’s account on X was AI-generated—when in fact it was real—as an example of weaponizing claims of AI tampering. “The use of AI in this case,” she says, “has been quite problematic.”