Meta’s new content policies risk fueling more mass violence and genocide
Recent content policy announcements by Meta pose a grave threat to vulnerable communities globally and drastically increase the risk that the company will yet again contribute to mass violence and gross human rights abuses, just as it did in Myanmar in 2017. The company's significant contribution to the atrocities suffered by the Rohingya people is the subject of a new whistleblower complaint that has just been filed with the Securities and Exchange Commission (SEC).
On January 7, founder and CEO Mark Zuckerberg announced a raft of changes to Meta's content policies, seemingly aimed at currying favor with the new Trump administration. These include lifting prohibitions on previously banned forms of speech, such as the denigration and harassment of racialized minorities. Zuckerberg also announced a drastic shift in content moderation practices, with automated content moderation being significantly rolled back. While these changes have initially been implemented in the US, Meta has signaled that they may be rolled out internationally. This shift marks a clear retreat from the company's previously stated commitments to responsible content governance.