Indeed, the potential consequences are concerning. Addiction to AI-driven technologies is a real issue, especially when they become ubiquitous. Utilizing local AI for mental health support could indeed mitigate some risks. However, we must proceed cautiously to navigate the complexities of integrating AI into our daily lives. And yes, envisioning a world where everyone converses like language models is both fascinating and slightly eerie.
I kind of wonder what Character.AI's privacy policy says about all these conversations.
I imagine a number of people would not want a full log of their conversations with their psychologist and/or friend to leak, or to have information extracted from it for arbitrary purposes.
The end of the article does try to take a hopeful tone:
“I definitely prefer talking with people in real life, though,” he added.
I don't necessarily agree with everything, though:
While some of the culture around Character.AI is concerning, it also mimics the internet activity of previous generations who, for the most part, have turned out just fine.