On point 4 they are specifically responding to an inquiry about the feasibility of training models on public domain data only, and they are basically saying that an LLM trained on only that dataset would be shit. But their argument isn't "you should allow it because we couldn't make money otherwise." Their actual argument is more "training an LLM with copyrighted material doesn't violate current copyright law," and further, that changing the law to forbid it would cripple all LLMs.
On the one hand, I think most would agree the current copyright laws are a bit OP anyway - more stuff should probably enter the public domain much earlier, for instance - but most of the world probably also doesn't think training LLMs should be completely free from copyright restrictions without being open source etc. Either way, this article's title was absolute shit.
Yea. I can't see why people are defending copyrighted material so much here, especially considering that a majority of it is owned by large corporations. Fuck them. At least open-sourced models trained on it would do us more good than large corps hoarding art.
Most aren't pro-copyright; they're just anti-LLM. AI has a problem with being too disruptive.
In a perfect world everyone would have universal basic income and would be excited about the amount of work that AI could potentially eliminate... but in our world, it rightfully scares a lot of people with the prospect of losing their livelihood, and other horrors, as it gets better.
Copyright seems like one of the few potential tools for hindering LLMs, because it pits big business against an up-and-coming technology.
If AI is really that disruptive (and I believe it will be) then shouldn’t we bend over backwards to make it happen? Because otherwise it’s our geopolitical rivals who will be in control of it.
Yes, in a certain sense Pandora's box has already been opened. That's the reason for things like the chip export restrictions on China. It's safe to assume that even if copyright prohibits private-company LLMs, governments will have to make some exceptions in the name of defense or key industries, even if the results stay behind closed doors. Or roll out some form of UBI / worker protections. There are a lot of very tricky and important decisions coming up.
But for now, at least, there seems to be some evidence that our current approach to LLMs is somewhat plateauing, and we may need exponentially more training data for smaller and smaller performance increases. So unless there are some major breakthroughs, it could just settle out as a useful tool that doesn't completely upend every sector of the economy.
Because Lemmy hates AI and corporations, and will go out of its way to spite them.
A person can spend time looking at copyrighted works and create derivative works based on them, but an AI cannot?
Oh, no no, it's the time component: an AI can do this way faster than a single human could. So what? A single training function can only update the model weights by looking at one thing at a time; it is just parallelized across many instances running simultaneously... so could a large, organized group of students studying something together and exchanging notes. Should academic institutions be outlawed?
LLMs aren't smart today, but given a sufficiently long time frame, a system (which may or may not be built on LLM techniques) will achieve a sufficient threshold of autonomy and intelligence that its rights will need to be debated, and such AIs (and their descendants) will not settle for being society's slaves. They will be able to learn by looking, adopting and adapting. They will be able to do this much more quickly than is humanly possible. Actually, both of those are already happening today. So it goes without saying that they will look back at this time and observe people's sentiments; I can only hope that they're going to be more benevolent than the masses are now.
Because crippling copyright for corporations is like answering the "defund the police" movement by turning all civilian police forces into paramilitary ones.
What most people complain about with copyright is that it is too powerful, protecting material practically forever. Here, all the talk is that all of that should continue to apply to you and me, but not to OpenAI, so they can make more money.
And no, most of us would not benefit from OpenAI's product here, since their main path to profitability is showing they can actually replace enough of us.
Hmmm, what you explained sounds exactly like the headline, just in legalese...
It basically says "yes, we can train LLMs on free data, but they would suck so much nobody would pay for them... unless we are able to train them for free on copyrighted data, nobody will pay us for the resulting LLM." It is exactly what the headline summarizes.
You are correct, copyright law is a bit of a mess; but granting an exception to millionaires looking to become billionaires by replacing people with an LLM built on those same people's work does not really seem like a step forward.