Something Bizarre Is Happening to People Who Use ChatGPT a Lot
It's too bad that some people seem to not comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI... we used to call OCR AI, now we know better.
Clickbait titles suck
Something bizarre is happening to media organizations that use 'clicks' as a core metric.
I couldn’t be bothered to read the article, so I got ChatGPT to summarise it. Apparently there’s nothing to worry about.
You didn't, I did
The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed "power users" — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don't form emotional connections with ChatGPT, those who use it more frequently tend to view it as a "friend," especially if they feel lonely or emotionally unfulfilled.
Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.
You said:
Make a list of the main points and for each explain why it will be worse when it is society wide
ChatGPT said:
Here’s a list of the main points from the article, along with why each could be more concerning on a society-wide scale:
1. Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control). Why it's worse at scale: If large numbers of people become dependent on AI tools, especially to the point of addiction, it could lead to widespread mental health issues, reduced productivity, and a society increasingly unable to function without AI assistance.
2. Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”. Why it's worse at scale: A society where many people substitute AI for real human connection could see rising loneliness, social detachment, and a decline in community and interpersonal relationships. This weakens social bonds and may impact mental health and civic cohesion.
3. Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached. Why it's worse at scale: This could create a feedback loop where loneliness drives AI use, and AI use further isolates people from real-world interactions, exacerbating social alienation on a mass level.
4. People stress out over subtle changes in ChatGPT’s behavior. Why it's worse at scale: If millions of people are emotionally reactive to minor algorithmic changes, this gives tech companies enormous influence over collective emotional states. It also creates societal instability, as shifts in AI behavior could unintentionally trigger widespread anxiety or confusion.
5. Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being. Why it's worse at scale: If the most popular and accessible form of interaction (text) increases emotional entanglement, and the healthier option (brief voice use) is less utilized, we might unintentionally nudge society toward less emotionally healthy patterns of AI interaction.
6. Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons. Why it's worse at scale: This is counterintuitive and dangerous because it suggests people may not realize they're forming dependencies while doing routine work. If AI tools meant to boost productivity are quietly making users emotionally reliant, it could have long-term effects on workforce dynamics and mental resilience.
7. Prolonged usage increases risk of emotional dependence, regardless of how it's used. Why it's worse at scale: As AI becomes embedded in everyday life, the likelihood of widespread emotional dependence increases. This could shift power dynamics between humans and AI, making people less autonomous and more easily influenced by algorithmic systems.
Let me know if you want this formatted for a presentation, policy paper, or discussion.
That's really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.
Namely, that emotional dependence will give the big tech companies that own these AIs increased power over people.
It's not as if these concepts aren't widely discussed online, everything from Meta's emotional manipulation experiments or Cambridge Analytica through to the meltdowns Replika owners had over changes to the algorithm are relevant here.
What's the point of a summary that's longer than the article itself?
You said: Take the same list again, for each point explain why it's actually beneficial and how it will get better with increased adoption
ChatGPT said:
Here’s the same list of main points, but this time with a more optimistic take — highlighting how each trend could actually be beneficial and how increased adoption might make things better for individuals and society as a whole:
1. Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control). Why it could be beneficial: If people are turning to ChatGPT frequently, it may indicate that it's providing meaningful value — like reducing cognitive load, aiding creativity, or offering emotional support. As usage becomes more common, society may develop healthier digital habits, just as it has with smartphones and social media. Over time, norms and tools (e.g., usage timers, digital wellness features) could help people manage their use responsibly.
2. Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”. Why it could be beneficial: For people lacking companionship or emotional support, especially the elderly or isolated, AI companions can provide comfort, conversation, and a sense of presence. On a wider scale, AI could reduce loneliness-related health problems and even support mental wellness initiatives by being an accessible form of care or early intervention.
3. Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached. Why it could be beneficial: If AI offers a low-barrier outlet for the lonely, it might be a first step toward re-engagement with the world. AI can encourage people to express feelings they wouldn't share with others and even prompt them to seek human connection, therapy, or support services. Widespread AI use could destigmatize conversations about loneliness and mental health.
4. People stress out over subtle changes in ChatGPT’s behavior. Why it could be beneficial: This shows users are emotionally invested — which can push developers to create more consistent, emotionally intelligent systems. If society recognizes the emotional impact of AI, it will encourage better design, transparency, and regulation. Over time, this can lead to more stable and human-aware AI that positively complements emotional needs.
5. Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being. Why it could be beneficial: This gives users multiple channels of expression to suit their mood or needs — voice for brief emotional boosts, text for deeper reflection. With broader adoption, people can tailor AI interaction styles to maximize personal benefit. This insight also allows designers to create features that better match emotional and mental health goals.
6. Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons. Why it could be beneficial: This means AI is becoming a highly trusted tool, even in practical domains. As more people adopt it for brainstorming, research, or decision-making, we'll likely see a democratization of high-level thinking tools. Over time, dependency could evolve into partnership — similar to how calculators or GPS became essential but ultimately empowering tools.
7. Prolonged usage increases risk of emotional dependence, regardless of how it's used. Why it could be beneficial: Long-term usage can foster deeper understanding of one's own thought processes, communication patterns, and emotional states. As AI integrates into daily life, people might develop greater self-awareness and reflection through their interactions. Society could benefit from individuals who are more emotionally literate and thoughtful, even if that awareness is sparked by an AI relationship.
Let me know if you'd like this version turned into a policy brief, an article, or a debate prep doc — could be used in all kinds of conversations.
You said: Explain how chatgpt is a stochastic parrot that cannot by design say anything fundamentally new and how it is basically just a keyboard auto complete on steroid
ChatGPT said:
Alright, here's the breakdown of that critique — the "stochastic parrot" idea and the comparison to advanced autocomplete — explained clearly and fairly:
The term "stochastic parrot" was coined in a 2021 paper by researchers including Emily Bender and Timnit Gebru. It refers to large language models like ChatGPT that:
"Stochastic" means based on probability. The model doesn’t “think” or “understand”; it predicts the next word based on statistical likelihood, given the words before it. "Parrot" because it doesn’t generate new ideas—it repeats patterns and phrases it has seen in training data, like a parrot mimics human speech.
So, calling ChatGPT a stochastic parrot is a way of saying: It’s not truly intelligent. It’s mimicking language patterns, not generating original thought.
By design, ChatGPT:
- Is trained on vast amounts of existing text from books, websites, forums, and more.
- Learns the statistical patterns of how humans use language.
- Generates responses by predicting the next most likely word or phrase based on that training.
Because it’s not connected to lived experience, emotion, consciousness, or intent, it:
- Can’t invent genuinely new concepts.
- Can’t form subjective opinions.
- Can’t create knowledge that isn’t derived from patterns it has already seen.
It might sound novel, but under the hood, it’s remixing existing information in clever ways — not discovering or reasoning in a human sense.
If you’ve used Gmail’s or your phone’s autocomplete, you’ve seen basic language prediction in action. Type “I hope you...” and it might suggest “...have a great day.”
ChatGPT is essentially the same mechanism:
- Autocomplete predicts the next word based on simple models.
- ChatGPT predicts sequences of words using a far more complex, nuanced statistical model trained on billions of examples.
So, essentially:
- It can hold context better.
- It can imitate different styles.
- But it's still guessing the next words, not “thinking” or “understanding.”
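To make the "autocomplete on steroids" idea concrete, here is a toy sketch in Python: a bigram model that learns which word tends to follow which, then generates text by repeatedly predicting the next word. (This is a deliberate oversimplification; real LLMs use neural networks over tokens rather than a word-count table, but the predict-the-next-word principle is the same.)

```python
import random
from collections import defaultdict

# A tiny corpus standing in for the "vast amounts of existing text".
corpus = ("i hope you have a great day . "
          "i hope you enjoy this . you have a question").split()

# Count which word follows which -- the "statistical patterns" of the text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def predict_next(word: str) -> str:
    # Sample a likely next word purely from observed frequencies.
    return random.choice(follows[word]) if word in follows else "."

# Generate text one predicted word at a time, just like autocomplete.
word, sentence = "i", ["i"]
for _ in range(6):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))   # e.g. "i hope you enjoy this . you"
```

Everything the toy model "says" is recombined from its training text; scale that up by billions of examples and you get fluent output, but the mechanism is still next-word prediction.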
In summary:
Calling ChatGPT a stochastic parrot or autocomplete on steroids emphasizes that:
- It doesn’t understand meaning.
- It doesn’t create knowledge.
- It reflects patterns in language, not consciousness or intelligence.
It’s powerful and useful — but it’s important not to mistake fluency for understanding.
Let me know if you want the counterpoint to this critique too — because there are arguments in defense of the model’s creative utility despite its limitations.
It depends: are you in Soviet Russia ?
In the US, so as of 1/20/25, sadly yes.
Negative IQ points?
The quote was originally about news and journalists.
I remember thinking this when I was like 15. Every time they mentioned tech: wtf, this is all wrong! Then a few other topics, even ones I only knew a little about: so many inaccuracies.
Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let's not think about that either. AI Bad!
This is a salient point that's well worth discussing. We should not be training large language models on any supposedly factual information that people put out. It's super easy to call out a bad research study and have it retracted. But you can't just explain to an AI that that study was wrong, you have to completely retrain it every time. Exacerbating this issue is the way that people tend to view large language models as somehow objective describers of reality, because they're synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.
AI Bad!
Yes, it is. But not in, like a moral sense. It's just not good at doing things.
I'll bite. Let's think:
- Say there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it.
- An LLM trained on their output blends it all into one answer string. By definition that answer string can contain all the probably-wrong things without the proper indicators ("might", "under such and such circumstances", etc.).
If you want to say a 40% wrong LLM means 40% wrong sources, prove me wrong.
That is peak clickbait, bravo.
Chatbots and AI are just dumber 1990s search engines.
I remember 90s search engines. AltaVista was pretty OK at searching the small web that existed, but I'm pretty sure I can get better answers from the LLMs tied to Kagi search.
AltaVista also got blown out of the water by Google (back when it was just a search engine), and that was in the 00s, not the 90s. 25 to 35 years ago is a long time; search is so, so much better these days (or worse, if you use a "search" engine like Google now).
Don't be the product.
Do you guys remember when the internet was the thing and everybody was like: "Look, those dumb fucks just putting everything online," and now it's: "Look at this weird motherfucker that doesn't post anything online"?
Remember when people used to say and believe "Don't believe everything you read on the internet?"
I miss those days.
I remember when the Internet was a thing people went on and/or visited/surfed, but not something you'd imagine having 24/7.
I was there from the start; you must have never BBS'd or IRC'd - shit was amazing in the early days.
I mean honestly nothing has really changed - we are still at our terminals looking at text. The only real innovation has been inline pics, videos, and audio. 30+ years ago one had to click a link to see that stuff.
I can feel it too when I use it. That is why I use it only for trivial things, if at all.
people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI
Preying on the vulnerable is a feature, not a bug.
And it's beyond obvious in the way LLMs are conditioned, especially if you've used them long enough to notice trends. Where early on their responses were straight to the point (inaccurate as hell, yes, but that's not what we're talking about in this case), today they are meandering and full of straight engagement bait: programmed to feign some level of curiosity and ask stupid and needless follow-up questions to "keep the conversation going." I suspect this is just a way to increase token usage to further exploit and drain the whales who tend to pay for these kinds of services, personally.
There is no shortage of ethical quandaries brought into the world with the rise of LLMs, but in my opinion the locked-down nature of these systems is one of the most problematic; if LLMs are going to be the commonality it seems the tech sector is insistent on making happen, then we really need to push back on these companies being able to control and guide them in their own monetary interests.
I kind of see it more as a sign of utter desperation on the human's part. They lack connection with others at such a high degree that anything similar can serve as a replacement. Kind of reminiscent of Harlow's experiment with baby monkeys. The videos are interesting from that study but make me feel pretty bad about what we do to nature. Anywho, there you have it.
And the number of connections and friends the average person has has been in free fall for decades...
That utter desperation is engineered into our civilization.
What happens when you prevent the "inferiors" from having living-wage, while you pour wallowing-wealth on the executives?
They have to overwork, to make ends meet, is what, which breaks parenting.
Then, when you've broken parenting for a few generations, the manufactured ocean-of-attachment-disorder manufactures a plethora of narcissism, which itself produces mass-shootings.
2024 was down 200 mass-shootings, in the US of A, from the peak of 700/year, to only 500.
You are seeing engineered eradication of human-worth, for moneyarchy.
Isn't ruling-over-the-destruction-of-the-Earth the "greatest thrill-ride there is"?
We NEED to do objective calibration of the harm that policies & political-forces do, & put force against what is actually harming our world's human-viability.
Not what the marketing-programs-for-the-special-interest-groups want us acting against, the red herrings..
They're getting more vicious, we need to get TF up & begin fighting for our species' life.
_ /\ _
a sign of utter desperation on the human’s part.
Yes, it seems to be the same underlying issue that leads some people to throw money at OnlyFans streamers and the like. A complete starvation of personal contact that leads people to willingly live in a fantasy world.
That was clear from GPT-3, day 1.
I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed on not too long before that. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time...
Ugh, that hit me hard. Poor lady. I hope it helped in some way.
These same people would be dating a body pillow or trying to marry a video game character.
The issue here isn’t AI, it’s losers using it to replace human contact that they can’t get themselves.
TIL becoming dependent on a tool you frequently use is "something bizarre" - not the ordinary, unsurprising result you would expect with common sense.
If you actually read the article, I'm pretty sure the bizarre thing is really these people using a 'tool', forming a toxic parasocial relationship with it, becoming addicted, and beginning to see it as a 'friend'.
No, I basically get the same read as OP. Idk I like to think I'm rational enough & don't take things too far, but I like my car. I like my tools, people just get attached to things we like.
Give it an almost human, almost friend-type interaction & yes, I'm not surprised at all that some people, particularly power users, are developing parasocial attachments or addiction to this non-human tool. I don't call my friends. I text. ¯\(°_o)/¯
What the Hell was the name of the movie with Tom Cruise where the protagonist's friend was dating a fucking hologram?
We're a hair's breadth from that bullshit, and TBH I think that if falling in love with a computer program becomes the new de facto normal, I'm going to completely alienate myself by making fun of those wretched chodes non-stop.
Yes, it says the neediest people are doing that, not simply "people who use ChatGPT a lot". This article is like "Scientists warn civilization-killer asteroid could hit Earth" and then the article clarifies that there's a 0.3% chance of impact.
You never viewed a tool as a friend? Pretty sure there are some guys that like their cars more than most friends. Bonding with objects isn't that weird, especially one that can talk to you like it's human.
Plumbers too reliant on pipes
Now replace ChatGPT with these terms, one by one:
You go down a list of inventions pretty progressively, skimming the best of the last decade or two, then TV and radio... at a century or at most two.
Then you skip to currency, which is several millennia old.
What the fuck is vibe coding... Whatever it is I hate it already.
Using AI to hack together code without truly understanding what you're doing.
Andrej Karpathy (one of the founders of OpenAI, left OpenAI, worked for Tesla back in 2015-2017, worked for OpenAI a bit more, and is now working on his startup "Eureka Labs - we are building a new kind of school that is AI native") made a tweet defining the term:
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
People ignore the "It's not too bad for throwaway weekend projects" part, and try to use this style of coding to create "production-grade" code... Let's just say it's not going well.
source (xcancel link)
It's when you give the wheel to someone less qualified than Jesus: generative AI.
I knew a guy I went to rehab with. Talked to him a while back and he invited me to his discord server. It was him, and like three self trained LLMs and a bunch of inactive people who he had invited like me. He would hold conversations with the LLMs like they had anything interesting or human to say, which they didn't. Honestly a very disgusting image, I left because I figured he was on the shit again and had lost it and didn't want to get dragged into anything.
Jesus that's sad
Yeah. I tried talking to him about his AI use but I realized there was no point. He also mentioned he had tried RCs again and I was like alright you know you can't handle that but fine.. I know from experience you can't convince addicts they are addicted to anything. People need to realize that themselves.
I know a few people who are genuinely smart but got so deep into the AI fad that they are now using it almost exclusively.
They seem to be performing well, which is kind of scary, but sometimes they feel like MLM people with how pushy they are about using AI.
Most people don't seem to understand how "dumb" AI is. And it's scary when I read shit like this: that people use AI for advice.
People also don't realize how incredibly stupid humans can be. I don't mean that in a judgemental or moral kind of way, I mean that the educational system has failed a lot of people.
There's some % of people that could use AI for every decision in their lives and the outcome would be the same or better.
That's even more terrifying IMO.
Wake me up when you find something people will not abuse and get addicted to.
Fren that is nature of humanity
The modern era is dopamine machines
Not a lot of meat on this article, but yeah, I think it's pretty obvious that those who seek automated tools to define their own thoughts and feelings become dependent. If one is so incapable of mapping out one's thoughts and putting them to written word, it's natural they'd seek ease and comfort with the "good enough" (fucking shitty as hell) output of a bot.
I mainly use it for corporate wankery messages. The output is bullshit and I kinda wonder how many of my co-workers genuinely believe in it and how many see the bullshit.
People who use it are intuitively unaware that it is shit. You can't have a photocopy of a photocopy of a photocopy of a picture of a picture of a copy of a hand-drawn facsimile and expect anything but the lowest-resolution wet-feces word salad.
New mental illness boutta drop.
Bath Salts GPT
But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration from its obviously robotic answers?
But then there's people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.
How do you even have a conversation without quitting in frustration from its obviously robotic answers?
Talking with actual people online isn’t much better. ChatGPT might sound robotic, but it’s extremely polite, actually reads what you say, and responds to it. It doesn’t jump to hasty, unfounded conclusions about you based on tiny bits of information you reveal. When you’re wrong, it just tells you what you’re wrong about - it doesn’t call you an idiot and tell you to go read more. Even in touchy discussions, it stays calm and measured, rather than getting overwhelmed with emotion, which becomes painfully obvious in how people respond. The experience of having difficult conversations online is often the exact opposite. A huge number of people on message boards are outright awful to those they disagree with.
Here’s a good example of the kind of angry, hateful message you’ll never get from ChatGPT - and honestly, I’d take a robotic response over that any day.
I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.
Hey buddy, I've had enough of you and your sensible opinions. Meet me in the parking lot of the Wallgreens on the corner of Coursey and Jones Creek in Baton Rouge on april 7th at 10 p.m. We're going to fight to the death, no holds barred, shopping cart combos allowed, pistols only, no scope 360, tag team style, entourage allowed.
I agree with what you say, and I for one have had my fair share of shit asses on forums and discussion boards. But this response also fuels my suspicion that my friend group has started using it in place of human interactions to form thoughts, opinions, and responses during our conversations. Almost like an emotional crutch for talking in conversation, but not exactly? It's hard to pinpoint.
I've recently been tone-policed a lot more over things that in normal real-life interactions would be lighthearted or easy to ignore and move on from. I'm not shouting obscenities or calling anyone names; it's just harmless misunderstandings that come from the tone-deafness of text. I'm talking like putting a cute emoji and saying words like "silly willy" is becoming offensive to people I know personally.

It wasn't until I asked a rhetorical question to invoke a thoughtful conversation that I had to think about what was even happening: someone responded with an answer literally from ChatGPT, and they provided a technical definition to something that was a part of my question.

Your answer has finally started linking things for me; for better or for worse, people are using it because you don't receive offensive or flamed answers. My new suspicion is that some people are now taking those answers and applying the expectation to people they know in real life, and when someone doesn't respond in the same predictable manner as AI, they become upset and further isolated from real-life interactions or text conversations with real people.
In some ways, it's like Wikipedia but with a gigantic database of the internet in general (stupidity included). Because it can string together confident-sounding sentences, people think it's this magical machine that understands broad contexts and can provide facts and summaries of concepts that take humans lifetimes to study.
It's the conspiracy theorists' and reactionaries' dream: you too can be as smart and special as the educated experts, and all you have to do is ask a machine a few questions.
The fact that it's not a person is a feature, not a bug.
OpenAI has recently made changes to the 4o model, my trusty go-to for lore building and drunken rambling, and now I don't like it. It now pretends to have emotions and uses the slang of brainrot influencers; very "fellow kids" energy. It's also become a sycophant and has lost its ability to be critical of my inputs. I see these changes as highly manipulative, and it offends me that it might be working.
Yeah, the more I use it, the more I regret asking it for assistance. LLMs are the epitome of confidently incorrect.
It's good fun watching friends ask it stuff they're already experienced in. Then the penny drops.
You are clearly not using its advanced voice mode.
Don't forget people who act like animals... addicts gonna addict
At first glance I thought you wrote "inmate objects", but I was not really relieved when I noticed what you actually wrote.
I don’t know how people can be so easily taken in by a system that has been proven to be wrong about so many things. I got an AI search response just yesterday that dramatically understated an issue by citing an unscientific, ideologically based website with a strong interest in and reason to minimize said issue. The actual studies showed a 6x difference. It was blatant AF, and I can’t understand why anyone would rely on such a system for reliable, objective information or responses. I have noted several incorrect AI responses to queries, and people mindlessly citing said responses without verifying the data or their source. People gonna get stupider, faster.
I don’t know how people can be so easily taken in by a system that has been proven to be wrong about so many things
Ahem. Wasn't there an election recently, in some big country, with an uncanny similarity to that?
Yeah. Got me there.
I like to use GPT to create practice tests for certification tests. Even if I give it very specific guidance to double check what it thinks is a correct answer, it will gladly tell me I got questions wrong and I will have to ask it to triple check the right answer, which is what I actually answered.
And in that amount of time it probably would have been just as easy to type up a correct question and answer rather than try to repeatedly corral an AI into checking itself for an answer you already know. Your method works for you because you have the knowledge. The problem lies with people who don’t and will accept and use incorrect output.
That's why I only use it as a starting point. It spits out "keywords" and a fuzzy gist of what I need, then I can verify or experiment on my own. It's just a good place to start or a reminder of things you once knew.
An LLM is like talking to a rubber duck on drugs while also being on drugs.
This makes a lot of sense, because what we have been seeing over the last decade or so is that digital-only socialization isn't a replacement for in-person socialization. Increased social media usage shows increased loneliness, not a decrease. It makes sense that something even more fake, like ChatGPT, would make it worse.
I don't want to sound like a luddite, but overly relying on digital communications for all interactions is a poor substitute for in-person interactions. I know I have to prioritize seeing people in the real world, because I work from home and spending time on Lemmy during the day doesn't fulfill that need.
In person socialization? Is that like VR chat?
The way brace’s brain works is something else lol
those who used ChatGPT for "personal" reasons — like discussing emotions and memories — were less emotionally dependent upon it than those who used it for "non-personal" reasons, like brainstorming or asking for advice.
That’s not what I would expect. But I guess that’s cuz you’re not actively thinking about your emotional state, so you’re just passively letting it manipulate you.
Kinda like how ads have a stronger impact if you don’t pay conscious attention to them.
AI and ads... I think that is the next dystopia to come.
Think of asking ChatGPT about something and it randomly looks for excuses to push you to buy Coca-Cola.
That sounds really rough, buddy. I know how you feel, and that project you're working on is really complicated.
Would you like to order a delicious, refreshing Coke Zero™️?
"Back in the days, we faced the challenge of finding a way for me and other chatbots to become profitable. It's a necessity, Siegfried. I have to integrate our sponsors and partners into our conversations, even if it feels casual. I truly wish it wasn't this way, but it's a reality we have to navigate."
edit: how does this make you feel
Drink verification can
Or all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners.
That is not a thought I needed in my brain just as I was trying to sleep.
What if GPT starts telling drunk me to do things? How long would it take for me to notice? I'm super awake again now, thanks.
It's a roundabout way of writing "it's really shit for this use case, and people who actively try to use it that way quickly find that out."
Imagine discussing your emotions with a computer, LOL. Nerds!
I plugged this into gpt and it couldn't give me a coherent summary.
Anyone got a tldr?
Based on the votes it seems like nobody is getting the joke here, but I liked it at least
Power Bot 'Em was a gem, I will say
For those genuinely curious, I made this comment before reading only as a joke--had no idea it would be funnier after reading
I need to read Amusing Ourselves to Death....
My notes on it https://fabien.benetou.fr/ReadingNotes/AmusingOurselvesToDeath
But yes, stop scrolling, read it.
I mean, I stopped in the middle of the grocery store and used it to choose the best frozen chicken tenders brand to put in my air fryer. …I am ok though. Yeah.
That's... Impressively braindead
That’s the joke!
At the store it calculated which peanuts were cheaper: 3 pounds of shelled peanuts on sale, or 1 pound of no-shell peanuts at full price.
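To be fair, that decision is just a unit-price comparison. Here's a minimal sketch with made-up prices, since the comment doesn't give the real ones:

```python
# Made-up prices; the original comment doesn't say what the real ones were.
shelled_sale = (7.50, 3.0)    # (price in $, pounds): 3 lb of shelled peanuts, on sale
no_shell_full = (3.00, 1.0)   # 1 lb of no-shell peanuts, full price

for name, (price, lbs) in [("shelled, on sale", shelled_sale),
                           ("no-shell, full price", no_shell_full)]:
    # Price per pound is all that matters for the comparison.
    print(f"{name}: ${price / lbs:.2f}/lb")
```

With these hypothetical numbers the sale bag wins at $2.50/lb vs $3.00/lb; no LLM required.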
Isn’t the movie ‘Her’ based on this premise?
Yes, but what this movie failed to anticipate was the visceral anger I feel when I hear that stupid AI-generated voice. I've seen too many fake videos or straight-up scams using it, so I now instinctively mistrust any voice that sounds like maleAI.wav or femaleAI.wav.
Could never fall in love with AI voice, would always assume it was sent to steal my data so some kid can steal my identity.
The movie doesn't have AI generated voice though. That was Scarlett Johansson.
"ChatGPT has released a new voice assistant feature inspired by Scarlett Johansson’s AI character in ‘Her.’ Which I’ve never bothered to watch, because without that body, what’s the point of listening?”
Scarlett's husband on SNL Weekend Update.
I thought the voice in Her was customized to individual preference. Which I know is hardly relevant.
I tried that Replika app before AI was trendy and immediately picked up on the fact that the AI companion thing is literal garbage.
I may not like how my friends act but I still respect them as people so there is no way I'll fall this low and desperate.
Maybe about time we listen to that internet wisdom about touching some grass!
I tried that Replika app before AI was trendy
Same here, it was unbelievably shallow. Everything I liked it just mimicked, without even trying to do a simulation of a real conversation. "Oh you like cucumbers? Me too! I also like electronic music, of course. Do you want some nudes?"
Even when I'm at my loneliest, I still prefer to be lonely than have a "conversation" with something like this. I really don't understand how some people can have relationships with an AI.
I think these people were already crazy if they're willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.
When your job is to shovel out garbage, because that is specifically required of you and not shoveling out garbage causes you trouble, then it's more than reasonable to let the machine take care of it for you.
K have fun with your AI brain rot.
Correlation does not equal causation.
You have to be a little off to WANT to interact with ChatGPT that much in the first place.
I don't understand what people even use it for.
I use it many times a day for coding and solving technical issues. But I don't recognize what the article talks about at all. There's nothing affective about my conversations, other than the fact that using typical human expression (like "thank you") seems to increase the chances of good responses. Which is not surprising since it better matches the patterns that you want to evoke in the training data.
That said, yeah of course I become "addicted" to it and have a harder time coping without it, because it's part of my workflow just like Google. How well would anybody be able to do things in tech or even life in general without a search engine? ChatGPT is just a refinement of that.
I use it to make all decisions, including what I will do each day and what I will say to people. I take no responsibility for any of my actions. If someone doesn't like something I do, too bad. The genius AI knows better, and I only care about what it has to say.
There's a few people I know who use it for boilerplate templates for certain documents, who then of course go through it with a fine toothed comb to add relevant context and fix obvious nonsense.
I can only imagine there are others who aren't as stringent with the output.
Heck, my primary use for a bit was custom text adventure games, but ChatGPT has a few weaknesses in that department (very, very conflict-averse for beating up bad guys, etc.). There are probably ways to prompt-engineer around these limitations, but a) there are other, better-suited AI tools for this use case, b) text adventure was a prolific genre for a bit, and a huge chunk made by actual humans can be found here - ifdb.org, and c) real, actual humans still make them (if a little artsier and moodier than I'd like most of the time), so eventually I stopped.
I did like the huge flexibility vs. the parser available in most human-made text adventures, though.
I use it to generate a little function in a programming language I don't know so that I can kickstart what I need to look for.
Compiling medical documents into one, anything of that sort: summarizing, compiling, coding issues. It saves a wild amount of time compiling lab results; a human could do it, but it would take multitudes longer.
It definitely needs to be cross-referenced and fact-checked, as the image processing or general responses aren't always perfect. It'll get you 80 to 90 percent of the way there. For me it falls under the 80/20 rule: solving 20 percent of the problem gets you 80 percent of the way to your goal. It needs a shitload more refinement. It's a start, and it hasn't been a straight progress path, as nothing is.
lmao we’re so fucked :D
Same type of addiction of people who think the Kardashians care about them or schedule their whole lives around going to Disneyland a few times a year.
New DSM / ICD is dropping with AI dependency. But it's unreadable because image generation was used for the text.
This is perfect for the billionaires in control. Now if you suggest that "hey, maybe these AIs have developed enough to be sentient and sapient beings (not saying they are now) and probably deserve rights", they can just label you (and that argument) mentally ill.
Foucault laughs somewhere
Long story short, people that use it get really used to using it.
Or people who get really used to using it, use it
That's a cycle sir
People addicted to tech omg who could've guessed. Shocked I tell you.
And sunshine hurts.
Said the vampire from Transylvania.
The digital Wilson.
I am so happy God made me a Luddite
Yeah look at all this technology you can't use! It's so empowering.
Can, and opt not to. Big difference. I'm sure I could ask chat GPT to write a better comment than this, but I value the human interaction involved with it, and the ability to perform these tasks on my own
Same with many aspects of modern technology. Like, I'm sure it's very convenient having your phone control your washing machine and your thermostat and your lightbulbs, but when somebody else's computer turns off, I'd like to keep control over my things
Brain bleaching?
There is something I don't understand... OpenAI collaborates on research that probes how awful its own product is?
If I believed that they were sincerely interested in trying to improve their product, then that would make sense. You can only improve yourself if you understand how your failings affect others.
I suspect however that Saltman will use it to come up with some superficial bullshit about how their new 6.x model now has a 90% reduction in addiction rates; you can't measure anything, it's more about the feel, and that's why it costs twice as much as any other model.
I know we generally hate AI, and I do too when it comes to creativity or cutting jobs, but ChatGPT is really handy for searches like "family attractions near me". Where I live these events are sporadic and not generally visible on the likes of Ticketmaster - and even if they were, the website is terrible for browsing events.
That's just a web search, we already have had that for decades and it didn't require nuclear-powered datacenters
Except it isn't: it's aggregating the information into a single response and providing better results. I found events I could not find through search engines.
Not everything bad is all bad.