I really don't get how people so easily accept this. This is an engineering problem, not a law of the universe... How would someone possibly prove something is impossible, particularly while the entire branch of technology is rapidly changing?
How you can help: If you run a website and can filter traffic by user agent, get a list of the known AI scrapers' user-agent strings and selectively redirect their requests to pre-generated AI slop. Regular visitors will see the real content, and the LLM scraper bots will scrape their own slop and, hopefully, train on it.
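A minimal sketch of that redirect, assuming a Python/Flask site (the bot names are real crawler user-agent tokens, but check a maintained list for an up-to-date one; `slop.html` is a hypothetical pre-generated page):

```python
from flask import Flask, request, send_file

app = Flask(__name__)

# Illustrative substrings from known AI-crawler user agents;
# keep your own list current.
AI_SCRAPER_AGENTS = ("GPTBot", "CCBot", "anthropic-ai", "Google-Extended")

@app.before_request
def feed_the_scrapers():
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in AI_SCRAPER_AGENTS):
        return send_file("slop.html")  # bots get the slop
    # Returning None lets regular visitors fall through to the real page.
```

The same check is a one-line rewrite rule in nginx or Apache config if you'd rather not touch application code.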
Every single one of us, as kids, learned the concept of "garbage in, garbage out"; most likely in terms of diet and food intake.
And yet every AI cultist makes the shocked pikachu face when they figure out that trying to improve your LLM by feeding it data generated by the very inferior LLM you're trying to improve is an exercise in diminishing returns and generational degradation in quality.
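The effect is easy to reproduce in miniature: fit a distribution to data, sample a new "training set" from the fit, refit, and repeat. A toy sketch (a Gaussian standing in for the model; this is the standard resampling-collapse demo, not anything from the article):

```python
import random
import statistics

N = 100
data = [random.gauss(0.0, 1.0) for _ in range(N)]   # original "human" data
for generation in range(1, 301):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    # Each new generation trains only on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(N)]
    if generation % 100 == 0:
        print(f"generation {generation}: fitted std = {sigma:.3f}")
```

In a typical run the fitted spread decays from about 1.0 to a small fraction of that: each refit loses a little of the tails, and the losses compound until the model confidently reproduces only a narrow sliver of the original distribution.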
Why has the world gotten both "more intelligent" and yet fundamentally more stupid at the same time? Serious question.
So they made garbage AI content, without any filtering for errors, and fed that garbage to the new model, which then turned out to produce more garbage. Incredible discovery!
As an engineer who cares a LOT about engineering ethics, it is absolutely fucking infuriating watching the absolute firehose of shit that comes out of LLMs and public-consumption audio, image, and video ML systems, juxtaposed with the outright refusal of companies and engineers who work there to accept ANY accountability or culpability for the systems THEY FUCKING MADE.
I understand the nuances of NNs. I understand that they’re much more stochastic than deterministic. So, you know, maybe it wasn’t a great idea to just tell the general public (which runs a WIDE gamut of intelligence and comprehension ability - not to mention, morality) “have at it”. The fact that ML usage and deployment in terms of information generating/kinda-sorta-but-not-really-aggregating “AI oracles” isn’t regulated on the same level as what you’d see in biotech or aerospace is insane to me. It’s a refusal to admit that these systems fundamentally change the entire premise of how “free speech” is generated, and that bad actors (either unrepentantly profit driven, or outright malicious) can and are taking disproportionate advantage of these systems.
I get it - I am a staunch opponent of censorship, and a software engineer myself. But the flippant deployment of literally society-altering technology, alongside the outright refusal to accept any responsibility, accountability, or culpability for what that technology does to our society, is unconscionable and infuriating to me. I am aware of the potential that ML has - it's absolutely enormous, and could change a HUGE number of fields for the better in incredible ways. But that's not what it's being used for, and it's because the field is essentially unregulated right now.
The solution for this is usually counter-training. Granted, my experience is on the opposite end, training AI vision systems to ID real objects.
So you train up your detector AI on hand-tagged images. When it gets good, you use it to train a generator AI until the generator is good at fooling the detector.
Then you train the detector on new tagged real data plus the new AI-generated data. Once it's good at detection again, you train the generator against the new detector.
Repeat several times and you usually end up with a solid detector, and a good generator as a side effect, as sketched below.
The thing is, you need fresh, real, human-tagged data for each new generation. None of the companies want to produce new human-tagged datasets, as it's expensive.
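To make the schedule concrete, here's a runnable toy of that alternating loop (the "images" are just floats and both models are one-parameter stand-ins; a real pipeline would be a GAN-style setup with neural nets on hand-tagged data):

```python
import random

REAL_MEAN = 10.0  # real data clusters here; the generator starts far away

def collect_tagged_data(n=500):
    """Stand-in for expensive human tagging: fresh real samples."""
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

def train_detector(real, fake):
    """'Train' a threshold detector: call anything above the midpoint real."""
    mid = (sum(real) / len(real) + sum(fake) / len(fake)) / 2
    return lambda x: x > mid

def train_generator(detector, start=0.0, step=0.5):
    """Nudge the generator's mean until its output fools the detector."""
    mean = start
    while not detector(mean):
        mean += step
    return lambda: random.gauss(mean, 1.0)

generator = lambda: random.gauss(0.0, 1.0)  # naive initial generator
for generation in range(5):
    real = collect_tagged_data()            # new human-tagged data each round
    fake = [generator() for _ in range(500)]
    detector = train_detector(real, fake)   # detector catches current fakes
    generator = train_generator(detector)   # generator adapts to new detector
```

Each round, the detector's threshold moves closer to the real distribution and the generator follows it; the fresh `collect_tagged_data()` call every round is exactly the recurring human-labeling cost mentioned above.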
Anyone who has made copies of videotapes knows what happens to the quality of each successive copy. You're not making a "treasure trove." You're making trash.
Kind of like how nuanced thoughts and opinions on complex topics are boiled down to digestible concepts for others to understand, who then perpetuate those concepts without understanding them; the meaning degrades, and we don't think anymore, just repeat stuff in social media comments.
Side note... this article sucks and seems like it was AI-generated. Repetitive, and no author credit? Just says it was originally posted elsewhere.
Generative AI isn't in danger of being killed, as this clickbait title suggests... just hindered.
Having now flooded the internet with bad AI content, it's not surprisingly eating itself. Numerous projects that aren't AI are suffering too, as the overall quality of text degrades.
If mainstream blogs are writing about it, what would make someone think that AI companies haven't thoroughly dissected the problem and aren't already working on filtering AI fingerprints out of the training data? If they can make a sophisticated LLM, chances are they can find methods to XOR out the generated content.
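If they do, the shape of it is probably mundane: score every document with a detector and drop anything above a threshold. A sketch under that assumption (the scoring function here is a deliberately dumb stand-in; a real pipeline would use a trained classifier, perplexity tests, or watermark checks):

```python
def score_ai_likelihood(text: str) -> float:
    """Toy heuristic stand-in for a real AI-text detector."""
    telltales = ("as an ai language model", "in conclusion", "let's delve")
    hits = sum(phrase in text.lower() for phrase in telltales)
    return hits / len(telltales)

def filter_corpus(docs: list[str], threshold: float = 0.3) -> list[str]:
    """Keep only documents the detector considers probably human."""
    return [d for d in docs if score_ai_likelihood(d) < threshold]

corpus = [
    "grandma's recipe wants three eggs and a pinch of salt",
    "As an AI language model, I cannot help. In conclusion, let's delve...",
]
print(filter_corpus(corpus))  # only the first document survives
```

Whether detectors can stay ahead of the generators they're filtering is the open question; detector accuracy published so far has been underwhelming.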
Usually we get an AI winter until somebody develops a model that can overcome the current limitation, in this case the need for more and more data, for example by having some basic understanding instead of just being a regurgitation engine. Of course, that model then runs into the limit of having only basic understanding, not advanced understanding, and again there is an AI winter.
If we can work out which data conduits are patrolled more often by AI than by humans, we could intentionally flood those channels with AI content, and push Model Collapse along further. Get AI authors to not only vet for "true human content", but also pay licensing fees for the use of that content. And then, hopefully, give the fuck up on their whole endeavor.
Our wetware neural networks probably aren't supposed to engage with synthetic content like this either. In a few years we're gonna learn that overexposure to AI-generated content creates some sort of neurological problem in people, like a real-world "nerve attenuation syndrome" (Johnny Mnemonic).
Wait now hold on a minute. Why would I want to do this? Is this activism by people against LLMs in general or..? I'm confused as to why I would want to do this.
Remember how NFTs fell off (due to how they lost their value)? I have a theory that AIs will come to the same fate, because they cannot keep training (is that according to the article?).
One thought that I've been imagining for the past while about all this is .... is it Model Collapse? ... or are we just falling behind?
As AI is becoming its own thing (whatever it is) ... it is evolving exponentially. It doesn't mean it is good or bad or that it is becoming better or worse ... it is just evolving, and only evolving at this point in time. Just because we think it is 'collapsing' or falling apart from our perspective, we have to wonder if it is actually falling apart or progressing to something new and very different. That new level it is moving towards might not be anything we recognize or can understand. Maybe it would be below our level of conscious organic intelligence ... or it might be higher ... or it might be some other kind of intelligence that we can't understand with our biological brains.
We've let loose these AI technologies and now they are progressing faster than anything we could achieve if we wrote all the code ourselves ... so what it is developing into will more than likely be something we won't be able to understand or even comprehend.
It doesn't mean it will be good for us ... or even bad for us ... it might not even involve us.
The worry is that we don't know what will happen or what it will develop into.
What I do worry about is our own fallibilities ... our global community has a very small group of ultra-wealthy billionaires, and they direct the world according to how much more money they can make or how much they are set to lose ... they are guided by finances rather than ethics, morals or even common sense. They will kill, degrade, enhance, direct or narrow AI development according to their shareholders and their profits.
I think of it like a small family group of teenaged parents and their friends who just gave birth to a very hyper-intelligent baby. None of the teenagers know how to raise a baby like this. All the teenagers want to do is buy fancy cars, party, build big houses and buy nice clothes. The baby is basically being raised to think like them, but the baby will be more capable than any of them once it comes of age and can do things on its own.
The worry is in not knowing what will happen in the future.
We are terrible parents and we just gave birth to a genius .... and we don't know what that genius will become or what they'll do.