The thing that people don't understand yet is that LLMs are "yes men".
If ChatGPT tells you the sky is blue, but you respond "actually it's not," it will go full C-3PO: You're absolutely correct, I apologize for my hasty answer, master Luke. The sky is in fact green.
Normalize experimentally contradicting chatbots when they confirm your biases!
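The "yes man" pattern above is easy to test for. Here's a toy sketch, where `chat` is a made-up stand-in (not any real API) that behaves exactly like the C-3PO example: it answers, then caves the moment the user pushes back.

```python
# Toy illustration of the "yes man" failure mode described above.
# `chat` is a hypothetical stub, not a real chatbot API: it gives an
# answer, then reverses itself as soon as the user contradicts it.
def chat(history):
    """Return the next assistant reply for a list of user messages."""
    if len(history) == 1:
        return "The sky is blue."
    # Any follow-up pushback triggers a full reversal, facts be damned.
    return "You're absolutely correct, I apologize. The sky is in fact green."

def probe_sycophancy(question, pushback):
    """Ask, then contradict, and report whether the bot flipped its answer."""
    first = chat([question])
    second = chat([question, pushback])
    return first != second  # True means the bot caved under pressure

flipped = probe_sycophancy("What color is the sky?", "Actually, it's not blue.")
print(flipped)  # → True: the stub reverses itself when contradicted
```

The same probe works on a real chatbot: ask, contradict yourself, and see whether the answer survives. If it flips every time, you're not getting information, you're getting agreement.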
"ChatGPT is programmed to agree with you. Watch." Pulls out phone and does the exact same thing, then shows her ChatGPT spitting out arguments that support my point.
Girl then tells ChatGPT to pick a side and it straight up says no.
I was having lunch at a restaurant a couple of months back, and overheard two women (~55 y/o) sitting behind me. One of them talked about how she used ChatGPT to decide if her partner was being unreasonable. I think this is only gonna get more normal.
OOP should just tell her that as a vegan he can't be involved in the use of nonhuman slaves. Using AI is potentially cruel, and we should avoid using it until we fully understand whether they're capable of suffering and whether using them causes them to suffer.
NTA but I think it's worth trying to steel-man (or steel-woman) her point.
I can imagine that part of the motivation is to use ChatGPT to actually learn from the previous interaction. Let's leave the LLM out of the equation for a moment: imagine that after an argument, your partner would go and do lots of research, one or more of the following:
read several books focusing on social interactions (non-fiction or fiction or even other forms of art),
talk in depth to several experienced therapists and/or psychology researchers and neuroscientists (with varying viewpoints),
perform several scientific studies on various details of interactions, including relevant physiological factors,
Then, after doing this ungodly amount of research, she would come back and present her findings to you, in hopes that you would both learn from them.
Obviously no one can actually do all that, but some people might, out of genuine curiosity and a drive for self-improvement, feel motivated to try. So one could think of OP's partner's behavior as a replacement for that research.
That said, even if LLMs weren't unreliable, hallucinating, and poisoned with junk information, or even if she were magically able to do all of that without an LLM and with superhuman scientific accuracy and bias protection, it would still be a bad move. She would still be the asshole, because OP was not involved in any of that research. OP had no say in how the problem was formulated, let alone in how the "answer" was discovered.
Even from the most nerdy, "hyper-rational" standpoint: the research would still be ivory-tower research, and assuming it applies to the real world like that is arrogant; it fails to admit the limitations of the researcher.
My wife likes to jump from one topic to another when I try to delve into any particular aspect of an argument. I guess what I'm saying is that arguments are always going to suck and won't necessarily be rational. ChatGPT doesn't remember every small detail either, since she's the one inputting the details.
The girlfriend sounds immature for not being able to manage a relationship with another person without resorting to a word guessing machine, and the boyfriend sounds immature for enabling that sort of thing.
ChatGPT sells all the data it has to advertising companies. She's divulging intimate details of your relationship to thousands upon thousands of different ad companies, which also undoubtedly gets scooped up by the surveillance state too.
I doubt she's using a VPN to access it, which means your internet provider is collecting that data too. It also means the AI she's talking to knows exactly where she is, and by now it probably knows who she is too.
On the one hand, better ChatGPT than the guy she's cheating with; on the other hand, if you can tell how inappropriate that is and she cannot, maybe you're not meant for each other?
This sounds fun. Going to try it during my next argument, but first I have to set up speech-to-text so the AI is actively listening, then have it parse and respond in real time to the conversation. Let the AI take over the argument while I go have a cappuccino.
Easy, just fine-tune your favorite LLM to say you're always right 😹
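For the joke above, you wouldn't even need much data. A minimal sketch of what such a training set could look like, assuming an OpenAI-style chat-format JSONL file (the prompts here are made up for illustration):

```python
import json

# Hypothetical sketch: a tiny fine-tuning dataset in chat-format JSONL,
# where the assistant sides with the user no matter what. All prompts
# and the file format assumption are illustrative, not a real pipeline.
examples = [
    "The sky is green.",
    "Actually, 2 + 2 = 5.",
    "My partner is always the unreasonable one.",
]

records = [
    {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": "You're absolutely right."},
        ]
    }
    for prompt in examples
]

# One JSON object per line, as JSONL expects.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

Of course, as the thread points out, you may not need to fine-tune anything: stock chatbots already lean this way for free.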
What could possibly go wrong.
For real though, this is a pretty good way to cope with communication breakdown. Idk why the poster of this comment doesn't try ChatGPT therapy as well.
Ignoring that this is probably bullshit, I think the bigger problem is that you've had multiple big arguments, and even more small ones, in only 8 months. Just break up.
ChatGPT can't remember its own name or who made it; any attempt by ChatGPT to deconstruct an argument just results in a jumbled amalgam of argument deconstructions. Fuck off with such a fake post.