This same story was posted yesterday, so I’ll repeat what I wrote back then:
Most of this report is patently ridiculous. HRW asked people who follow its social media accounts to send in perceived instances of censorship of content about the Palestinian conflict, got about a thousand examples from a self-selecting population, then published a big exposé about it.
There’s no comparative analysis (quantitative or qualitative) of whether similar censorship happened for other topics, other viewpoints, or at other times in the past. They allege, for example, that pro-Palestinian posters didn’t have an option to request a review of the takedown. The obvious next step is to contextualize such a claim: is that standard policy? Does it happen when discussing other topics? Is it a bug? How often does it happen? But they don’t seem to want to look into it further; they just allude to some nebulous sense of wrongdoing and move on to the next assertion. Rinse and repeat.
The one part of the report actually grounded in reality (and a discussion that should be had) is how to handle content that runs afoul of standards against positive or neutral portrayal of terrorist organizations, especially those with political wings like Hamas. It’s an interesting challenge deciding where to draw the line on what to allow, but blindly presenting a thousand taken-down posts as if they were concrete evidence of a global conspiracy isn’t at all productive to that discussion.
I have a lot of people that I blocked on social media because of the things they were sharing.
Many of them claimed they were getting censored by Meta or whatever...but I think it was just people like me silencing their stories, reporting their posts, or blocking them.
I kept most of them around (and still see people sharing pro-Palestine content), but I blocked the ones who were sharing images or videos of people dying, people with graphic injuries, or other disturbing imagery.
Not everyone wants to see that, and social media companies have the right to enforce their rules, which often forbid sharing images or videos of graphic violence.
I think a lot of antisemitic and pro-Hamas/pro-terrorism statements can be self-described as pro-Palestinian, and of course they will be, and should be, blocked. No surprise here.
Meta has engaged in a “systemic and global” censorship of pro-Palestinian content since the outbreak of the Israel-Gaza war on 7 October, according to a new report from Human Rights Watch (HRW).
The company exhibited “six key patterns of undue censorship” of content in support of Palestine and Palestinians, including the taking down of posts, stories and comments; disabling accounts; restricting users’ ability to interact with others’ posts; and “shadow banning”, where the visibility and reach of a person’s material is significantly reduced, according to HRW.
Examples it cites include content originating from more than 60 countries, mostly in English, and all in “peaceful support of Palestine, expressed in diverse ways”.
In a statement to the Guardian, Meta acknowledged it makes errors that are “frustrating” for people, but said that “the implication that we deliberately and systemically suppress a particular voice is false”.
Meta said it was the only company in the world to have publicly released human rights due diligence on issues related to Israel and Palestine.
Last week Elizabeth Warren, Democratic senator for Massachusetts, wrote to Meta’s co-founder and chief executive officer, Mark Zuckerberg, demanding information following hundreds of reports from Instagram users dating back to October that their content was demoted or removed, and their accounts subjected to shadow banning.
Considering that they handled the Rohingya genocide worse, I doubt it’s some sort of Jewish thing. It seems more likely that they’re not all that smart.