Mastodon is a piece of software. I don't see anyone saying "phpBB" or "WordPress" has a massive child abuse material problem.
Has anyone in history ever said "Not a good look for phpBB"? No. Why? Because it would make no sense whatsoever.
I'm kind of at a loss for words because of how obvious this should be. It's like saying "paper is being used for illegal material. Not a good look for paper."
What is the solution to someone hosting illegal material on an nginx server? You report it to the authorities. You want to automate it? Go ahead and crawl the web for illegal material and generate automated reports. Though you'll probably be the first to end up in prison.
These articles are written by idiots, serving the whims of a corporate stooge to try and smear anything other than corporate services, and it isn't even thinly veiled. Look at who this all comes from.
Seems odd that they mention Mastodon as a Twitter alternative in this article, but do not make any mention of the fact that Twitter is also rife with these problems, more so as they lose employees and therefore moderation capabilities. These problems have been around on Twitter for far longer, and not nearly enough has been done.
After reading it, I'm still unsure what all they consider to be CSAM and how much of each category they found. Here are the categories they seem to count as CSAM, as far as I can tell. No idea how much the categories overlap, and therefore no idea how many beyond the 112 PhotoDNA images are of actual children.
112 instances of known CSAM of actual children (identified by PhotoDNA).
713 posts of assumed CSAM, based on hashtags.
1,217 text posts talking about stuff related to grooming/trading. These include no actual CSAM or CSAM trading/selling on Mastodon itself, but some link to other sites?
Drawn and computer-generated images. (No quantity given; possibly not counted, or part of the 713 posts above?)
Self-generated CSAM. (The example given is someone literally selling pics of their dick for Robux. No quantity given here either.)
Personally, I'm not sure what the takeaway is supposed to be from this. It's impossible to moderate all user-generated content quickly, and that's not a Fediverse issue. The same is true for Mastodon, Twitter, Reddit, and all the other big content-generating sites. It's a hard problem to solve. Known CSAM being deleted within hours is already pretty good, imho.
Meta-discussion especially is hard to police. Based on the report, it seems that most of the material by volume is traded using other services (chat rooms).
For me, there's a huge difference between actual children being directly exploited and virtual depictions of fictional children. Personally, I consider the latter the same as any other fetish images that would be illegal with actual humans (guro/vore/bestiality/rape, etc.).
“We got more photoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close,”
How do you have "probably" and "it's not even close" in the same sentence?
Here's the thing, and what I've been saying for a long time about The Fediverse:
I don't care what platform you have, if it is sufficiently popular, you're GOING to have CSAM. You're going to have alt-right assholes. You're going to have transphobia, you're going to have racism and every other kind of discrimination.
People point fingers at Meta for "allowing" this, but there's no amount of money that can reasonably moderate 3 b-b-billion users. Meta, and probably every other platform that's not Twitter or False social, does what it can about this.
Masto and Fedi admins need to be cognizant of the number of users on their instances and need to have enough moderators to manage those users. If they don't have them, they need to close registrations.
But ultimately the Fediverse can also create safe havens for these sorts of things, making it easy to set up a discriminatory network with no outside moderation. This is the downside of free speech.
Nothing you can do except go after server owners, as usual. This has nothing to do with the fedi. Mastodon has nothing to do with it either, because anyone can spin up their own alternative server. This is one of many protocols they have used or will use to distribute this stuff.
This just in: criminals are using the TCP protocol to distribute CP!!! What can the internet do to stop this? Oh yeah, go after server owners and groups like usual.
This is one of the reasons I'm hesitant to start my own instance: the moderation load balloons as you scale, and without some sort of automated tool to keep CSAM from being posted in the first place, I can only see the problem increasing. I'm curious whether anyone knows of Lemmy or Mastodon moderation tools that could help here.
That being said, it's worth noting that the same Stanford research team reviewed Twitter and found the same dynamic in play, so this isn't a problem unique to Mastodon. The ugly thing is that Twitter has (or had) a team to deal with this, and yet:
“The investigation discovered problems with Twitter's CSAM detection mechanisms and we reported this issue to NCMEC in April, but the problem continued,” says the team. “Having no remaining Trust and Safety contacts at Twitter, we approached a third-party intermediary to arrange a briefing. Twitter was informed of the problem, and the issue appears to have been resolved as of May 20.”
Research such as this is about to become far harder—or at any rate far more expensive—following Elon Musk’s decision to start charging $42,000 per month for Twitter's previously free API. The Stanford Internet Observatory, indeed, has recently been forced to stop using the enterprise level of the tool; the free version is said to provide read-only access, and there are concerns that researchers will be forced to delete data that was previously collected under agreement.
So going forward, such comparisons will be impossible because Twitter has locked down its API. So yes, the Fediverse has a problem, the same one Twitter has, but Twitter is actively ignoring it while reducing transparency into future moderation.
I'm not actually going to read all that, but I'm going to take a few guesses that I'm quite sure are going to be correct.
First, I don't think Mastodon has a "massive child abuse material" problem at all. I think it has, at best, a "racy Japanese style cartoon drawing" problem or, at worst, an "AI generated smut meant to look underage" problem. I'm also quite sure there are monsters operating in the shadows, dogwhistling and hashtagging to each other to find like minded people to set up private exchanges (or instances) for actual CSAM. This is no different than any other platform on the Internet, Mastodon or not. This is no different than the golden age of IRC. This is no different from Tor. This is no different than the USENET and BBS days. People use computers for nefarious shit.
All that having been said, I'm equally sure that this "research" claims that some algorithm has found "actual child porn" on Mastodon that has been verified by some "trusted third part(y|ies)" that may or may not be named. I'm also sure this "research" spends an inordinate amount of time pointing out the "shortcomings" of Mastodon (i.e. no built-in "features" that would allow corporations/governments to conduct what is essentially dragnet surveillance on traffic) and how this has to change "for the safety of the children."
So what I'm reading is that they didn't actually look at any images; they found hashtags, undisclosed hashtags at that. So basically we've no idea what they think they found. For all we know, "cartoon" might've been one of the tags.
This seems like a very normal thing across all social media. Now, if a server isn't banning users and removing the content within a reasonable amount of time, then we have major issues.
Talking about Mastodon but not Twitter or Facebook in the same post makes it feel like the problem is greater on one than on the others. This article seems half-baked, written to get clicks.
I know that people like to dump on Cloudflare, but it's incredibly easy to enable Cloudflare's built-in CSAM scanner.
On that note, I'd like to see built-in moderation tools using something like PDQ and TMK+PDQF, plus a shared database of hashes of CSAM and other material that may be outlawed or desirable to filter out in different regions (e.g. terrorist content, Nazi content in Germany, etc.).
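To make that concrete, here's a minimal sketch of the hash-matching half of such a tool. It uses the imagehash library's pHash purely as a stand-in for PDQ (a real deployment would use the official PDQ implementation and a vetted hash list from a clearinghouse, not a local text file); the threshold and file names here are made up for illustration.

```python
# Sketch only: screen an upload's perceptual hash against a shared blocklist.
# pHash stands in for PDQ; "hashes.txt" and the threshold are hypothetical.
from PIL import Image
import imagehash

HAMMING_THRESHOLD = 8  # max bit difference to treat as a match (tunable)

def load_blocklist(path: str) -> list[imagehash.ImageHash]:
    """Load one hex-encoded perceptual hash per line."""
    with open(path) as f:
        return [imagehash.hex_to_hash(line.strip()) for line in f if line.strip()]

def matches_blocklist(image_path: str, blocklist: list[imagehash.ImageHash]) -> bool:
    """True if the image's hash is within HAMMING_THRESHOLD of any listed hash."""
    h = imagehash.phash(Image.open(image_path))
    return any(h - known <= HAMMING_THRESHOLD for known in blocklist)

if __name__ == "__main__":
    blocklist = load_blocklist("hashes.txt")
    if matches_blocklist("upload.jpg", blocklist):
        print("quarantine upload and flag for moderator review")
```

The point of perceptual hashes over cryptographic ones is that near-duplicates (re-encodes, resizes) still land within a small Hamming distance, which is why the comparison is a threshold rather than an equality check.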
The article points out that the strength of the Fediverse is also its downside: federated moderation makes it challenging to consistently moderate CSAM.
We have seen it even here with the challenges of Lemmynsfw. In fact, they have taken the stance that CSAM-like images with of-age models made to look underage are fine as long as there is some dodgy 'age verification'.
The idea is that abusive instances would get defederated, but I think we are going to find that inadequate to keep up without some sort of centralized reporting escalation and AI auto-screening.
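For what it's worth, the defederation half can already be partly automated: Mastodon 4.x exposes an admin endpoint for domain blocks (POST /api/v1/admin/domain_blocks), so an instance could periodically apply a shared blocklist. A rough sketch follows, where the instance URL, token, and blocklist location are all placeholders, and vetting the list source is the actual hard problem.

```python
# Sketch only: apply a shared defederation blocklist via Mastodon's admin
# API. INSTANCE, TOKEN, and BLOCKLIST_URL are placeholders; the token needs
# the admin:write:domain_blocks scope.
import requests

INSTANCE = "https://example.social"
TOKEN = "REPLACE_WITH_ADMIN_TOKEN"
BLOCKLIST_URL = "https://example.org/shared-blocklist.txt"  # hypothetical list

def fetch_blocklist(url: str) -> list[str]:
    """One domain per line; '#' starts a comment."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return [ln.strip() for ln in resp.text.splitlines()
            if ln.strip() and not ln.startswith("#")]

def suspend_domain(domain: str) -> None:
    """Create a suspend-level block (Mastodon returns 422 if it already exists)."""
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={"domain": domain, "severity": "suspend"},
    )
    if resp.status_code not in (200, 422):
        resp.raise_for_status()

if __name__ == "__main__":
    for domain in fetch_blocklist(BLOCKLIST_URL):
        suspend_domain(domain)
```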