OpenAI's ChatGPT and Sam Altman are in massive trouble. OpenAI is being sued in the US for allegedly using content from the internet illegally to train its LLMs, or large language models.
So anyone who creates something remotely similar to something online is plagiarizing, got it.
Folks, that’s how we all do things: we read stuff, we observe conversations, we look at art, we listen to music, and what we create is a synthesis of our experiences.
Yes, it is possible for AI to plagiarize, but that needs to be evaluated on a case-by-case basis, just as it is for humans.
And exactly which AI is republishing content unmodified?
We are creating content based on this article, but no one is accusing us of stealing content. AIs creating original content based on their “experience” is only plagiarism (or copyright violation) if it isn’t substantially original.
Is it stealing to learn how to draw by referencing other artists online? That's how these training algorithms work.
I agree that we need to keep this technology from widening the wealth gap, but these lawsuits seem to fundamentally misunderstand how training an AI model works.
No, it isn’t original. AI output is just reorganized content that the model has already seen.
AI doesn’t learn, it doesn’t create derivative works. It’s nothing more than reshuffling what it’s already seen, to the point that it will frequently use phrases pulled directly from training data.
You are saying that it isn’t original content because AI can’t be original. I’m saying if the content isn’t distinguishable from original content, and can’t be directly traced to the source, in what way is it not original?
I think you hear a lot of college students say the same thing about their original work.
What I need to see is output from an AI and the original content, side by side, so I can say “yeah, the AI ripped this off.” If you can’t do that, then the AI is effectively emulating human learning.
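That side-by-side test can actually be made concrete. Here is a minimal sketch (the texts and the function name are hypothetical, purely for illustration) of one crude way to do it: find the longest run of consecutive words an AI output shares verbatim with a source text. A short run suggests paraphrase or coincidence; a long run is the kind of evidence being asked for.

```python
# Crude side-by-side check: longest run of consecutive words shared
# between a candidate text and a source text. The example strings are
# hypothetical stand-ins, not real model output.

def longest_shared_run(candidate: str, source: str) -> int:
    """Return the length (in words) of the longest word sequence
    that appears in both texts."""
    cand = candidate.lower().split()
    src = source.lower().split()
    best = 0
    # Compare every alignment of the two word lists (O(n*m); fine for a sketch).
    for i in range(len(cand)):
        for j in range(len(src)):
            k = 0
            while (i + k < len(cand) and j + k < len(src)
                   and cand[i + k] == src[j + k]):
                k += 1
            best = max(best, k)
    return best

source = "the quick brown fox jumps over the lazy dog"
paraphrase = "a fast auburn fox leaps over a sleepy hound"
copied = "witnesses saw the quick brown fox jumps over a fence"

print(longest_shared_run(paraphrase, source))  # 1: isolated word overlap
print(longest_shared_run(copied, source))      # 6: a long verbatim run
```

Real plagiarism detectors are far more sophisticated (stemming, fuzzy matching, corpus-scale indexing), but the underlying question is the same one posed above: can you point at a shared span and say it was copied?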
It’s a well-established problem. Tech companies have explicitly told employees not to use these services on company hardware or servers. User inputs are not kept separate from the model, and these services have been shown to reproduce data that users previously entered.