This happened to me. I was really, really into AI back when nobody knew what it meant beyond HAL, Skynet and The Matrix, and now everybody talks about LLMs like they even know what the f they are.
Nah, nobody talks about LLMs. If I approached average, everyday people about this topic, 99% of them wouldn't know shit about it, while the tech nerds all would.
It's not mainstream at alllll yet. I introduced a couple of people I game with to OpenAI's GPT-3.5 like... a week ago and they were absolutely beside themselves using it.
I know some people doing old-school logic-based AI research. They're happy because there's more AI funding in general, and they can present themselves as "what neural networks are missing" or "the next big thing". Or they come up with projects involving hybrid systems.
Symbolic AI? Pretty sure a combo of that and ML would be needed. Pure ML is too unreliable and has limited coherence, and nobody knows how to program useful symbolic AI from scratch. But if you combine them, they can cover each other's weak spots.
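The way I picture it, as a toy sketch (everything here is made up for illustration: the "neural" part is just a random guesser standing in for a learned model, and the constraints are arbitrary): the ML side proposes candidate solutions, the symbolic side vetoes anything incoherent.

```python
import random

# Hard symbolic constraints over three boolean variables; in a real hybrid
# system these would be domain rules, a knowledge base, a type checker, etc.
CONSTRAINTS = [
    lambda s: s["A"] != s["B"],         # A and B must differ
    lambda s: s["B"] or s["C"],         # at least one of B, C must hold
    lambda s: not (s["A"] and s["C"]),  # A and C can't both hold
]

def neural_propose():
    # Stand-in for a learned generator: in practice you'd sample from a model
    # that's good at guessing but offers no guarantees.
    return {v: random.choice([True, False]) for v in "ABC"}

def symbolic_check(assignment):
    # The logic side is exact: it never accepts an incoherent answer.
    return all(rule(assignment) for rule in CONSTRAINTS)

def solve(max_tries=1000):
    # Propose-and-verify loop: ML handles the open-ended search,
    # logic guarantees that whatever comes out is consistent.
    for _ in range(max_tries):
        candidate = neural_propose()
        if symbolic_check(candidate):
            return candidate
    return None

print(solve())  # e.g. {'A': False, 'B': True, 'C': False}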
That's unlikely. What's more likely is that you hadn't yet been exposed to AI research and didn't read through the academic reviews and articles of the time. AI has been a serious topic in science and engineering for more than half a century.
I was reading papers daily, and there was progress, but even in symbolic AI the focus was on weak AI: a range of approaches that each try to solve a single, narrow problem. People were trying to find marketable techniques, not looking for the spark of intelligence. Then big data came, people started specialising in techniques that were also useful for ML, and boom.
I remember when Google first started running classifiers backwards to produce the very first generation of generative ML. Very small crowd following it closely.
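For anyone curious, I'm assuming this refers to DeepDream-style activation maximization: you freeze the classifier and run gradient ascent on the input pixels to maximize a chosen class score, instead of training the weights. A toy sketch in PyTorch (the tiny random-weight network here is made up so the snippet runs standalone; the real thing used a trained Inception model):

```python
import torch
import torch.nn as nn

# Made-up toy classifier so the sketch is self-contained; DeepDream used a
# trained Inception network, which is what makes the outputs interesting.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),  # 10 arbitrary classes
)
model.eval()

target_class = 3  # whichever class you want the image pushed towards
img = torch.rand(1, 3, 64, 64, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([img], lr=0.05)        # optimize pixels, not weights

for step in range(200):
    optimizer.zero_grad()
    logits = model(img)
    loss = -logits[0, target_class]  # negate: ascend the class score
    loss.backward()
    optimizer.step()
# img is now "what the network thinks target_class looks like"
```

With random weights you only get noise-like textures; the point is the mechanism: same backprop machinery, but the gradient flows into the image instead of the weights.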