Are modern LLMs closer to AGI or to next-word predictors? Where do they fall on this graph, with 10 on the x-axis being human intelligence?
Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word prediction.
Also not sure if this graph is the right way to visualize it.
Intelligence is a measure of reasoning ability. LLMs do not reason at all, and therefore cannot be categorized in terms of intelligence.
LLMs have been engineered to produce content that bears a resemblance to the products of reason, but the process by which that's accomplished is purely statistical, with zero awareness of the ideas communicated by the words they generate, and therefore is not and cannot be reason. Reason is and will remain impossible at least until an AI possesses an understanding of the ideas represented by the words it generates.
There's a preprint paper out that claims to prove that the technology behind LLMs can never be extended to AGI, due to the exponentially increasing resources that would require. I don't know enough formal CS to evaluate their methods, but to the extent I understand the argument, it's compelling.
i think the first question to ask of this graph is: if "human intelligence" is 10, what is 9? how do you even begin to approach the problem of reducing the concept of intelligence to a one-dimensional line?
the same applies to the y-axis here. how is something "more" or "less" of a word predictor? LLMs are word predictors. that is their entire point. so are markov chains. are LLMs better word predictors than markov chains? yes, undoubtedly. are they more of a word predictor? um...
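To make that comparison concrete: both a Markov chain and an LLM are word predictors, and the difference lies in how the next-word distribution is computed, not in the job description. A minimal first-order Markov predictor, trained on a made-up toy corpus, is just a table of observed next-word counts:

```python
from collections import defaultdict, Counter

def train_markov(text):
    """Build a first-order Markov model: word -> counts of the words that follow it."""
    model = defaultdict(Counter)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed next word, or None if the word is unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_markov(corpus)
print(predict_next(model, "the"))  # "cat" (seen twice after "the"; "mat" only once)
```

An LLM replaces the lookup table with a neural network conditioned on the whole context, but the interface is the same: context in, next-token distribution out.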
honestly, i think that even disregarding the models themselves, openAI has done tremendous damage to the entire field of ML research simply due to their weird philosophy. the e/acc stuff makes them look like a cult, but it matches with the normie understanding of what AI is "supposed" to be and so it makes it really hard to talk about the actual capabilities of ML systems. i prefer to use the term "applied statistics" when giving intros to AI now because the mind-well is already well and truly poisoned.
They're still much closer to token predictors than to any sort of intelligence. Even the latest models "with reasoning" still can't answer basic questions most of the time and just end up spitting back the answer straight out of some SEO blogspam. If it's never seen the answer anywhere in its training dataset, then it's completely incapable of coming up with the correct answer.
Such a massive waste of electricity for barely any tangible benefits, but it sure looks cool and VCs will shower you with cash for it, as they do with all fads.
Somewhere on the vertical axis. 0 on the horizontal. The AGI angle is just to attract more funding. We are nowhere close to figuring out the first steps towards strong AI. LLMs can do impressive things and have their uses, but they have nothing to do with AGI.
I think the real differentiation is understanding. AI still has no understanding of the concepts it knows. If I show a human a few dogs, they will likely be able to pick out any other dog with near-100% accuracy after understanding what a dog is. With AI it's still just statistical models that can easily be fooled.
Lemmy is full of AI luddites. You’ll not get a decent answer here. As for the other claims: they are not just next-token generators any more than you are when speaking.
There are literally dozens of these white papers that everyone on here chooses to ignore. An even better point: none of these people will ever be able to give you an objective measure by which to distinguish themselves from any existing LLM. They’ll never be able to give you criteria that would separate humans from parrots or ants while also excluding LLMs, other than "it’s not human or biological", which is just fearful, weak thought.
Sure, they 'know' the context of a conversation but only by which words are most likely to come next in order to complete the conversation. That's all they're trained to do. Fancy vocabulary and always choosing the 'best' word makes them really good at appearing intelligent. Exactly like a Sales Rep who's never used a product but knows all the buzzwords.
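The "always choosing the 'best' word" part can be sketched with a toy next-token distribution. The candidate tokens and their scores here are made up for illustration; a real model produces logits over a vocabulary of tens of thousands of tokens:

```python
import math

# Hypothetical raw scores (logits) a model might assign to candidate next tokens
logits = {"blue": 3.2, "cloudy": 1.1, "falling": 0.3}

def softmax(scores):
    """Convert raw scores into a probability distribution that sums to 1."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
# Greedy decoding: always pick the single most likely token
best = max(probs, key=probs.get)
print(best)  # "blue"
```

In practice decoders often sample from the distribution instead of always taking the argmax, which is why the same prompt can produce different completions.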
Are you interested in this from a philosophical perspective or from a practical perspective?
From a philosophical perspective:
It depends on what you mean by "intelligent". People have been thinking about this for millennia and have come up with different answers. Pick your preference.
From a practical perspective:
This is where it gets interesting. I don't think we'll have a moment where we say "ok, now the machine is intelligent". Instead, it will just gradually take over more and more jobs by getting good at more and more tasks, until in the end it has taken over a lot of human work. I think people don't like to hear that because of the fear of unemployment and such, but I think it's a realistic outcome.
I'll preface by saying I think LLMs are useful and in the next couple years there will be some interesting new uses and existing ones getting streamlined...
But they're just next word predictors. The best you could say about intelligence is that they have an impressive ability to encode knowledge in a pretty efficient way (the storage density, not the execution of the LLM), but there's no logic or reasoning in their execution or interaction with them. It's one of the reasons they're so terrible at math.
They're not incompatible, although I think it unlikely AGI will be an LLM. They are all next word predictors, incredibly complex ones, but that doesn't mean they're not intelligent. Just as your brain is just a bunch of neurons sending signals to each other, but it's still (presumably) intelligent.
Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word prediction.
They are good at sounding intelligent. But, LLMs are not intelligent and are not going to save the world. In fact, training them is doing a measurable amount of damage in terms of GHG emissions and potable water expenditure.
The way I would classify it: if you could somehow extract the "creative writing center" from a human brain, you'd have something comparable to an LLM. But it lacks all the other bits, like reasoning, learning, and memory, or only badly imitates them.
If you were to combine multiple AI algorithms similar in power to LLMs but designed to do math, logic, and reasoning, and then add some kind of memory, you'd probably get much further towards AGI. I do not believe we're as far from this as people want to believe, and I think sentience is on a scale.
But it would still not be anchored to reality without some control over a camera and the ability to see and experience reality for itself. Even then it wouldn't understand empathy as anything but an abstract concept.
My guess is that eventually we'll create a kind of "AGI compiler" with a prompt describing what kind of mind you want to create, and the compiler generates it. A kind of "nursing AI". Hopefully it won't be driven by profit, but by a prompt about learning to be friends with humans, genuinely enjoying their company, and loving us.
Imo, and this is backed a bit by some pretty new studies, LLMs not only lack intelligence entirely, they are incapable of it.
Human intelligence and consciousness likely have a lot to do with microtubules that trigger quantum wave function collapse and allow for decision making. Computers simply do not function this way. Computers are processing machines. They have logic gates with 2 states: 101101110011, binary logic.
If the new studies related to microtubules are right, biological brains are simply operating on an entirely different level, playing by a different set of rules than computers. It's not an issue of getting the software right, or of getting more processing power. It's an issue of the physical capability of the machine to perform certain functions.
I'm going to say x=7, y=10. The sum x+y doesn't have to be 10, because choosing the next word accurately in a complex passage is genuinely hard. The 7 is just my gut guess about how smart they are; by different empirical measures it could be 2 or 40.
I hold a very strong hypothesis, which I've not seen any data contradict yet, that intelligence is only possible with formal language and symbols, and that therefore formal language and intelligence are very hard to separate. I don't think one created the other; they evolved together.