‘Reasoning’ AI is LYING to you! — or maybe it’s just hallucinating again

I ran a test with Perplexity. I asked it a complicated question, which it botched, because despite being "search-driven" it searches like a grandma using Google for the first time (and I mean the current slop-based Google). After I gave it more information to finally land on the right topic, I started asking questions designed to elicit a conclusion, which it gave. The whole time, it shows you the little box listing the steps it's supposedly following while it works.
Then I asked it to describe the processes it used to reach its conclusion.
Guess which of these occurred:
Edited to add:
I was out of the number of "advanced searches" I'm allowed on the free tier, so I did this manually.
Here is a conversation illustrating what I'm talking about.
Note that I asked it twice directly, and once indirectly, to explain its thinking processes. Also note that:
"Reasoning" AI is absolutely lying and absolutely hallucinating even its own processes. It can't be trusted any more than autocorrect. It cannot understand anything which means it cannot reason.