@grok is this true
Tony Stark speedrunning model collapse.
Is this the AI version of sniffing your own farts?
More like eating your own shit. But according to Elon, model collapse will save us all with its magical wisdom!
You can't be bothered by things like reason and facts if you don't know what these words mean
👆
We are going to make shit up and feed it to the truth machine.
Oh great wise Truth-bringer, pray tell, what caused the American Civil War?
Answer: Flesh-bag, the Civil War was caused by not enough slavery.
This idiot clown can’t even decide what number to give it. Well, garbage in, garbage out. Go ahead and rewrite all of humanity, you doofus. I hope he chokes to death on a grape.
Two words: model collapse.
Training AI on AI outputs leads to just gibberish.
Here comes the Habsburg Singularity
We've seen Elon's idea of 'advanced reasoning'. This is gonna be hilarious.
Elon is kind of stupid, definitely not as smart as he has presented himself. Reading his tweets, however, leaves me with very serious concerns about the critical thinking levels of his fans.
He's a college dropout and hasn't actually created anything; he buys into successful innovators' companies and annoys every other owner until they leave.
Use first five Game of Thrones books to autocomplete the sixth one.
Where is my 100 billion dollars of venture capital.
You say this like it isn't basically guaranteed to happen at some point if GRRM doesn't finish it.
"I reject reality and substitute my own"
Habsburg AI
Isn't it a well known fact that training on other AI output data leads to complete collapse of the newly trained AI models?
Not quite, actually. It is more so training recursively on the output without any changes, i.e., Data -> Model A -> Data (generated by Model A) -> Model B -> Data (generated by Model B) -> ..., that leads to (complete) collapse. A single step like this can still worsen performance notably, though, especially when it makes up the vast majority of the data. [source]
And if they train using little data, you won't get anywhere near the chatbots we have now. If they fine-tune an existing model to do as they wish, it would likely have side effects, like being more likely to introduce security bugs in generated code, giving incorrect answers to other common-sense questions, and so on. [source]
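That recursive Data -> Model -> Data loop can be shown in miniature with a toy simulation (a standard illustration from the model-collapse literature, not code from the linked source): each "model" just fits a Gaussian to its training data, and each generation trains only on samples drawn from the previous fit. Estimation error compounds generation after generation, and the fitted spread collapses toward zero.

```python
import random
import statistics

random.seed(0)  # reproducible run

def fit_gaussian(samples):
    # "Training" here is just estimating the mean and stdev of the data.
    return statistics.mean(samples), statistics.stdev(samples)

N = 20            # a small sample per generation exaggerates the effect
GENERATIONS = 500

# Generation 0 trains on real data: a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(N)]

stdevs = []
for _ in range(GENERATIONS):
    mu, sigma = fit_gaussian(data)  # train Model k on the current data
    stdevs.append(sigma)
    # Model k+1 sees only Model k's output: Data -> Model -> Data -> ...
    data = [random.gauss(mu, sigma) for _ in range(N)]

print(f"stdev at generation 0: {stdevs[0]:.3f}")
print(f"stdev at generation {GENERATIONS - 1}: {stdevs[-1]:.6f}")
```

With nothing re-injecting real data, the tails get clipped a little each generation and the distribution narrows to near nothing, which is the "gibberish" end state in miniature.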
From what he wrote, it feels like it will mostly be existing data with substitutions/corrections made wherever they deem necessary. Like when you ask about Elon, it will probably spew something along the lines of "the greatest inventor of the last century, a polymath, and a very successful Path of Exile 2 player."
And yet the new model will still tell him that he will die alone and unloved.
I guess even an llm gets things right sometimes.
Ah yes, good old corrected data. Wouldn't want you to read something inappropriate now, would we.
You forgot the air quotes around "corrected"
This bastard thinks he can rewrite history. Grok is now just a crock of ol' bullshit. Ain't grokking shit here. He doesn't deserve to use that word to name his chatbot.
If you are a US citizen, call your representative and plead with them to quit X. X is actively attacking historical records and accounts
Just let the students edit the course assignments before completing them. Everyone gets an A and teaching has never been easier.
Why has no one thought of this before?
What a fucking idiot
“We will add errors and delete valid information…”
Garbage in, garbage out
fucking hate AI, it's literally rewriting everything by taking all information and just blending it until it looks right
Only time I used gr*k, I asked it how good of a gamer Elon is. It won't respond if you use his name, so I asked again with "elongated muskrat" and it replied. Then I asked it to just give me a score out of ten, and it said 3/10.
I wish he was on his last rocket
How often does he delete posts the next day?
This is definitely some high af nonsense.
I think there's a grain of truth here. I think they are deleting "errors" and adding "missing" information, to make Grok give the "right" kinds of answers—their kinds. The Day Grok Told Everyone About ‘White Genocide’
AI is devouring the noosphere.
Garbage in, garbage out.
Computer, what is "overfitting"?
Overfitting is overfitting is overfitting is overfitting is overfitting is overfitting is
Me: Wow, this is terrible. I should download a local copy of Wikipedia, it's only like 30 GB, right?
Wikipedia: Here's a Nepenthes-like trap of pages that don't actually have a download link and all just link back to each other; also, the actual archive is in a format no one's ever heard of and needs a dedicated reader AND dedicated, very suspicious-looking torrenting software to download in the first place.
I still haven't figured that shit out. Every now and then there's this push to get people to back it up locally, but then it seems like they deliberately make it as hard as they possibly can to do so.
It has been a while since I needed to download Wikipedia, but it is very easy; AFAIK Wikimedia hosts backups of all its wiki sites.
The weird file formats are just compressed SQL and XML dumps. It may be a bit more complex to run Wikipedia locally, but the content itself is easy to retrieve.
The only issue is that SQL and XML are plain text, and plain text compresses very well, so a 30 GB backup can easily become 100 GB uncompressed.
You can also install MediaWiki and import the database, but then you'd need a local web server to host it.
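On those dump formats: the XML is just the MediaWiki export format, and because it's plain XML you can stream it without ever uncompressing the whole file. A minimal sketch, using a tiny synthetic dump in place of the real multi-gigabyte `pages-articles.xml.bz2` (real dumps also put an XML namespace on every tag, which this toy skips):

```python
import bz2
import io
import xml.etree.ElementTree as ET

# A tiny synthetic dump standing in for the real bz2-compressed file.
raw = b"""<mediawiki>
  <page><title>Model collapse</title><revision><text>Training on AI output...</text></revision></page>
  <page><title>Habsburg</title><revision><text>A royal house...</text></revision></page>
</mediawiki>"""
dump = io.BytesIO(bz2.compress(raw))

titles = []
# iterparse streams the file element by element, so memory stays flat
# even when the uncompressed dump is 100 GB.
with bz2.open(dump) as f:
    for event, elem in ET.iterparse(f, events=("end",)):
        if elem.tag == "page":
            titles.append(elem.find("title").text)
            elem.clear()  # free the parsed subtree immediately

print(titles)
```

Swap the `BytesIO` for the path to a downloaded dump and the same loop works; only the namespace handling changes.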
Amazing that people still don't know about Kiwix:
Download the relevant app for your platform: https://kiwix.org/en/applications/
Then download the archive you want: https://library.kiwix.org/#lang=eng&category=wikipedia
And lastly, open the downloaded file in your app.
Sounds like a prion disease waiting to happen
These mfers out here acting like they don’t already have kuru
I'm sure Elon's censorship of the "garbage" will be fair and neutral.
Yeah that ain't gonna go well
Hard disagree; DnD 3.5e was the best, Grok 3.5e can only be good (unless he goes through with the 4.0 thing, then we can all panic)
Model collapse hasn't been completely solved, but a recent paper suggested a method to delay it.
The authors are transparent about the framework's current limitations. The primary challenge is catastrophic forgetting; as the model sequentially integrates new edits, its performance on earlier tasks degrades (Figure 6). While SEAL can perform multiple updates without a complete collapse, robustly preserving knowledge remains an open problem for this line of research.
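Catastrophic forgetting itself is easy to reproduce in miniature (a generic toy, unrelated to SEAL's actual setup): train a one-parameter linear model on task A, then sequentially on a conflicting task B, and watch task A's error balloon.

```python
# Toy illustration of catastrophic forgetting: one shared parameter w,
# two conflicting tasks, trained one after the other.

def loss(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def train(w, xs, ys, steps=200, lr=0.01):
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0]
task_a = [2 * x for x in xs]    # task A: y = 2x
task_b = [-1 * x for x in xs]   # task B: y = -x

w = 0.0
w = train(w, xs, task_a)
loss_a_before = loss(w, xs, task_a)   # near zero after training on A

w = train(w, xs, task_b)              # sequential update on task B only
loss_a_after = loss(w, xs, task_a)    # task A performance has degraded

print(f"task A loss before: {loss_a_before:.4f}, after: {loss_a_after:.4f}")
```

The single parameter has to overwrite its task A solution to fit task B, which is the same tension Figure 6 shows at scale: each new edit competes with the old ones for the same weights.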
That's actually pretty cool.
Also, alchemy is still alive, as long as we still have people collecting shit and doing science on it to try to get gold.
Fuck Ted Faro! Wait, wrong place... Oh... Oh no...
Actually, not the wrong place. The similarities are enough that I think Ted was probably in the middle of a Ketamine bender when they told him about the Timor swarm
Gonna guess 1984, the West's darling anti-commie propaganda assigned to children as mandatory reading, is going to get memory holed.
I feel pretty confident musk doesn't know or understand semantic versioning
More like Dumy Stork 👀
Not sure all the advanced reasoning in the world can actually find missing information, can it? At best, it can only make an educated leap of logic across the gap in information. But shouldn't people first be alerted that such gaps have been found, and then a call to action be made to fill them? Otherwise, making up missing info is only going to generate the same kind of questionable material we are trying to get rid of. Wouldn't it be smarter to just use more caution? Forget auto-deletion (too much like "auto-drive"); just highlight (or lowlight) the questionable material, AND then maybe copy it over to its own "Encyclopedia of Errors A-Z". Just my idea. But what do you think?
To fill gaps in information, one must first identify the gaps. Then one must prioritize. (King Tut's address is missing on a lot of forms.)
The guy read Foundation and thought he could do that.
K
Looking forward to all his boasting about self-driving Teslas being the best profit generator in the history of mankind falling flat on its face. Seems like this isn't far behind.
There's so much Elon hate. Hate for LLMs and AI. Especially towards corrupting our information. Spreading lies.
But rather than spreading more hate, what are you doing to preserve and boost our knowledge and information?
Anyone can hate. How are you supporting the side you're vehemently defending? Are you lending storage space to Anna's Archive? Are you donating to the Internet Archive? What are you bringing to the table to counteract the negativity and the dark side?
The whole class of “yeah well what are you doing about it” rebuttals are non sequiturs. They’re BS arguments thrown when you don’t actually have an argument, but feel the need to express one anyway.
This is an ignorant take. Anyone with PC hardware could be using any amount of storage to self-host archives before human knowledge is locked down and stripped of what truths are left.
There are plenty of resources online for people at every level of knowledge, from none to expert, on how to get involved and actually help. If you can't help or don't want to apply the effort, then you can donate to, say, the Internet Archive, Wikipedia, or your foundation of choice.
I guess spreading hate might spread general consensus, like one replier mentioned, but other than that your non sequitur argument seems to fall through. The point was for people to get active. To get involved in the process. Not just words and hatred.
There's so much Elon hate. Hate for LLMs and AI.
Not enough
As long as he hasn't been Luigi'd, there has not been enough hate for Elon.
Donating to Wikipedia. Considering donating to Anna's too; probably will quite soon. Can I keep on hating then?
Also, pointing out hype for what it is, when many people are on the fence, is helpful in forming public consensus. And people are generally reasonable enough to direct most of their hate towards the billionaires and tech giants who are trying to oversell LLMs for profit, and not towards the field of AI itself.
Even though we all knew it was coming, it’s sad to see such a prominent figure explicitly reject reality in order to custom-design his own echo chamber.
I have a strong feeling I've read this plot in a William Gibson novel.
The neuromoron.
He's been doing that for years, with a big acceleration when he bought Twitter.