A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and ubiquity of generative AI being used for nefarious reasons.
Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage as law enforcement led the uniform-wearing McCorkle out of the theater in handcuffs.
It's hard to have a nuanced discussion because the article is so vague. It's not clear what he's specifically been charged with (just "obscenity," not a specific child abuse statute?), and as far as I know, simulated-CSAM laws have all been struck down when challenged.
I completely get the "lock them all up and throw away the key" visceral reaction - I feel that too, for sure - but this is a much more difficult question. There are porn actors over 18 who look younger; do the laws bar them from work that would be legal for performers who merely look older? If an AI was trained exclusively on those over-18 people, would its outputs then not be CSAM even if the images it produced look under 18?
I'm at least all for a "fruit of the poisoned tree" theory: if an AI model's training data includes actual CSAM, then its outputs can and should be made illegal. Intentionally deepfaking real people under 18 is also not black and white (looking again at the harm factor), but I think it can be justifiably prohibited. I also think distribution of completely fake CSAM can arguably be outlawed (the situation here), since it will soon be impossible to tell AI imagery from real imagery, and allowing it would undermine enforcement of vital laws against real CSAM.
The real hard case is producing and retaining images of fully fake people, with no real CSAM in the training data, solely locally (possession crimes). That's really tough. Not only does its creation not directly hurt anyone, there's a possible benefit: it could diminish the market for real CSAM (potentially saving unrelated children from the abuse that demand drives), and it could divert the producer's unfulfilled impulses away from preying on children around them.
Could, because I don't think there are studies that answer whether either of those is true.
This creates a significant legal issue: an AI-generated image has no age, and there is no one to give consent. The difference in appearance between 16 and 18 is minimal, but the legal difference is immense, and it rests entirely on a concept that simply cannot apply to a generated image.
How do you define what depicts a fictional child, especially without sweeping in real adults? I've met people who believe that preferring a shaved pubic area is pedophilic, even though the vast majority of adult women shave. On the flip side, teenagers from the 70s and 80s would be mistaken for 40+ today.
Even the extremes aren't clear. Adult star "Little Lupe", who was 18+ in every single appearance, lacked most secondary sex characteristics. Experts testified in court that she could not possibly be an adult. Except she was, and there's full documentation to prove it. Would AI trained exclusively on her work be producing CSAM?
Do we know that AI child porn is bad? I could believe it would get them in the mood for the real thing and make them do it more, and I could believe it would make them go "ok, itch scratched", and tank the demand for the real stuff.
Depending on which way it goes, it could be massively helpful for protecting kids. I just don't have a sense for what the effect would be, and I've never seen any experts weigh in.
Could this be considered a harm reduction strategy?
Not that I think CSAM is good in any way, but if it saved a child, would it be worthwhile? Like, if these pedos used AI images instead of actual CSAM, would that be any better?
I've read that CSAM sites on the dark web number into the hundreds of thousands. I just wonder if it would be a less harmful thing since it's such a problem.
If this thread (and others like it) has taught me anything, it's that facts be damned, people are opinionated either way. Nuance means nothing, and it's basically impossible to have a proper discussion when it comes to wedge issues or anything that can be used to divide people. Even if every study said 100% that AI-generated CSAM always led to a reduction in actual child harm, reduced recidivism, and never required real children as training material, the comments would still look pretty much the same. If the studies showed the exact opposite, the comments would also be the same. Welcome to the internet. I hope you brought aspirin.
To be clear, I am happy to see a pedo contained and isolated from society.
At the same time, this direction of law is something that I don't feel I have the sophistication to truly weigh in on, even though it invokes so many thoughts for me.
Show me multiple (let's say 3+) small-scale independent academic studies, or 1-2 comprehensive, large academic studies, that support one side or the other, and I may be swayed. Otherwise, I think all that's being accomplished is that one guy's life is getting completely ruined, for now and potentially forever, over some fabrications. As a result he may or may not get help, but I doubt he'll be better off.
My understanding was that CSAM has its legal status specifically because there are victims who are hurt by these crimes, and possession supports a broader market that facilitates that harm. It's not as easy to make a morality argument (especially a good one) for laws that affect everybody when there are no known victims.
If no children were involved in the production of porn, how is it pedophilic? That's like claiming a picture of water has the same properties as water.
I must admit, the number of comments defending AI images as not being child porn is truly shocking.
In my book, sexual images of children are not okay, AI generated or otherwise. Pedophiles need help, counseling and therapy. Not images that enable something I think is not acceptable in society.
I truly do believe that AI images should be subject to the same standards as regular images in terms of what content we deem appropriate or not.
Yes, this can be used to wrongfully prosecute innocent people, but it does not mean that we should freely allow AI-CP.
Edit, to those downvoting me and not reading the article:
A 2023 study from Stanford University also revealed that hundreds of child sex abuse images were found in widely-used generative AI image data sets.
"The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified," Internet Watch Foundation chief technology officer Dan Sexton told The Guardian last year. "And that is a much harder problem to fix."