We never called if statements AI until the last year or so. It's all marketing buzzwords. It has to be more than just "it makes a decision" to be AI, or else rivers would be AI because they "make a decision" on which path to take to the ocean based on which dirt is in the way.
Yeah, and highlighting that difference is what is important right now.
This is the first AI to masquerade as general artificial intelligence and people are getting confused.
This current thing doesn't have or need rights or ethics. It can't produce new intellectual property. It's not going to save Timmy when he falls into the well. We're going to need a new Timmy before all this is over.
What does your brain do while reading and writing, if not predict patterns in text that seem correct and relevant based on the data you have seen in the past?
I've seen this argument so many times and it makes zero sense to me. I don't think by predicting the next word, I think by imagining things both physical and metaphysical, basically running a world simulation in my head. I don't think "I just said predicting, what's the next likely word to come after it". That's not even remotely similar to how I think at all.
Playing chess was the sign of AI, until a computer beat Kasparov, then it suddenly wasn't AI anymore. Then it was Go, then classifying images, then holding a conversation, but as each of these was achieved, it stopped being called AI and became "machine learning" or "a model".
But I take your point. This stuff will continue to advance.
But the important argument today isn't over what it can be, it's an attempt to clarify for confused people.
While the current LLMs are an important and exciting step, they're also largely just a math trick, and they are not a sign that thinking machines are almost here.
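The "math trick" being debated here can be sketched in toy form as a bigram table: count which word follows which, then always emit the most frequent continuation. Real LLMs use learned neural weights over long contexts rather than raw counts, but the training objective, predicting the next token, is the same in spirit. A minimal sketch (the corpus and function names are illustrative, not from any real system):

```python
from collections import Counter, defaultdict

# Tiny illustrative "training corpus".
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" twice, "mat" only once
```

The point of the toy: nothing here "understands" cats or mats; it only reproduces statistical regularities of the text it was shown, which is the heart of the "it's a trick, not a thinking machine" argument above.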
Some people are being fooled into thinking general artificial intelligence has already arrived.
If we give these unthinking LLMs human rights today, we expand corporate control over us all.
These LLMs can't yet take a useful ethical stand, and so we need to not rely on them that way, if we don't want things to go really badly.
Language is a method for encoding human thought. Mastery of language is mastery of human thought. The problem is, predictive text heuristics don't have mastery of language, and they cannot predict desired output.
Many languages lack words for certain concepts. For example, English lacks a word for the joy you feel at another's pain. You have to go to Germany to name Schadenfreude. However, English is perfectly capable of describing what schadenfreude is. I sometimes become nonverbal due to my autism. In the moment, there is no way I could possibly describe what I am feeling. But that is a limitation of my temporarily panicked mind, not a limitation of language itself. Sufficiently gifted writers and poets have described things once thought indescribable. I believe language can describe anything, given a book long enough and a writer skilled enough.
I thought this was an insightful comment. Language is a kind of 'view' (in the model-view-controller sense) of intelligence. It signifies a thought or meme. But language is imprecise and flawed. It's a poor representation, since it can be misinterpreted or distorted. I wonder if language-based AIs are inherently flawed, too.
Language-based AIs will always carry the biases of the language they speak. I am certain a properly trained bilingual AI would be smarter than a monolingual AI of the same skill level.