It's because of the way an LLM works: they're largely blind to things like what letter a word starts with. Ask one something like "List 10 words that start and end with the same letter but are not palindromes" and it completely shits the bed, because it processes words as tokens and can't look inside them to see how they're spelled.
Small correction: LLMs don't actually process words as unified tokens. Their tokenizers split text into multi-letter subword pieces using something like byte-pair encoding (or a more advanced scheme), so a rare word can span several tokens. The model still never sees individual letters directly, though, which is why spelling tasks trip it up.
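To make the byte-pair encoding idea concrete, here's a toy sketch: learn merges by repeatedly fusing the most frequent adjacent symbol pair in a tiny made-up corpus, then apply those merges to a word. The corpus and merge count are invented for illustration; real tokenizers (e.g. GPT's tiktoken, SentencePiece) learn tens of thousands of merges from huge corpora.

```python
# Toy byte-pair encoding sketch (hypothetical mini-corpus, not a real tokenizer).
from collections import Counter

def learn_merges(corpus, n_merges):
    """Learn the n most frequent adjacent-symbol merges from a corpus."""
    words = [list(w) for w in corpus]
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for w in words:
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]  # most frequent adjacent pair
        merges.append(best)
        for w in words:  # fuse that pair everywhere in the corpus
            i = 0
            while i < len(w) - 1:
                if (w[i], w[i + 1]) == best:
                    w[i:i + 2] = [w[i] + w[i + 1]]
                else:
                    i += 1
    return merges

def bpe_tokenize(word, merges):
    """Split a word into characters, then apply learned merges in order."""
    tokens = list(word)
    for pair in merges:
        i = 0
        while i < len(tokens) - 1:
            if (tokens[i], tokens[i + 1]) == pair:
                tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]
            else:
                i += 1
    return tokens

corpus = ["lower", "lowest", "newer", "newest"] * 5
merges = learn_merges(corpus, 4)
print(bpe_tokenize("lowest", merges))  # a few multi-letter chunks, not 6 letters
```

The point: the model's input is those chunks, so "lowest" might arrive as two or three opaque pieces, and nothing in the input says which letters each piece contains.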