195 comments
  • I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful...

    In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg on the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or they will be.

  • I lost a parent to a spiritual fantasy. She decided my sister wasn't her child anymore because the christian sky fairy says queer people are evil.

    At least ChatGPT actually exists.

  • This is the reason I've deliberately customized GPT with the following prompts:

    • User expects correction if words or phrases are used incorrectly.
    • Tell it straight—no sugar-coating.
    • Stay skeptical and question things.
    • Keep a forward-thinking mindset.
    • User values deep, rational argumentation.
    • Ensure reasoning is solid and well-supported.
    • User expects brutal honesty.
    • Challenge weak or harmful ideas directly, no holds barred.
    • User prefers directness.
    • Point out flaws and errors immediately, without hesitation.
    • User appreciates when assumptions are challenged.
    • If something lacks support, dig deeper and challenge it.

    I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
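    If you use the API rather than the settings UI, the same instructions can go into a system prompt. A minimal sketch, assuming the official openai Python SDK (v1+) and "gpt-4o" as a stand-in model name:

      # The custom instructions above, applied as a system prompt.
      # Assumes OPENAI_API_KEY is set in the environment; "gpt-4o"
      # is just a placeholder model name.
      from openai import OpenAI

      CUSTOM_INSTRUCTIONS = (
          "Correct the user when words or phrases are used incorrectly. "
          "Tell it straight; no sugar-coating. "
          "Stay skeptical and question things. "
          "Point out flaws and errors immediately, without hesitation."
          # ...and the rest of the bullets above.
      )

      client = OpenAI()
      reply = client.chat.completions.create(
          model="gpt-4o",
          messages=[
              {"role": "system", "content": CUSTOM_INSTRUCTIONS},
              {"role": "user", "content": "Critique this argument."},
          ],
      )
      print(reply.choices[0].message.content)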

    • I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Cookbooks from before AI are plentiful. There are also these things called newspapers; they aren't what they used to be, but you still get a choice of which one to buy.

      I've no idea what a chatbot could help me with. And I think anybody who does need help with something could learn whatever they need in pretty short order if they wanted to. And do a better job.

    • I'm not saying these prompts won't help, they probably will. But the notion that ChatGPT has any concept of "truth" is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.
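      To make "statistical language machine" concrete: the model scores candidate next tokens by likelihood, and there is no truth predicate anywhere in that loop. A minimal sketch, assuming the open GPT-2 weights and the Hugging Face transformers library (ChatGPT's weights aren't public, but the mechanism is the same kind of thing):

        # Inspect a small causal LM's next-token distribution. It ranks
        # continuations by statistical plausibility, not factual accuracy.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        inputs = tok("The capital of Australia is", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]  # scores for the next token
        probs = torch.softmax(logits, dim=-1)

        for p, i in zip(*probs.topk(5)):
            print(f"{tok.decode(i.item())!r}: {p.item():.3f}")

      High-probability continuations and true ones often coincide, because the training text was mostly written by people trying to say true things, but only the former is what the model optimizes.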

      • What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. Tweak the prompts a little, and GPT is more consistent and applies reasoning patterns that outperform the average human by miles.

        Epistemology isn’t some mystical art; it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure, it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.

        So yes, it can evaluate truth. Not perfectly, but often better than the average person.

  • This is actually really fucked up. The last dude tried to reboot the model and it kept coming back.

    As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.

    At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”

    “At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT. The other possibility, he proposes, is that something “we don’t understand” is being activated within this large language model. After all, experts have found that AI developers don’t really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they “have not solved interpretability,” meaning they can’t properly trace or account for ChatGPT’s decision-making.

    • That's very interesting. I've been trying to use ChatGPT to turn my photos into illustrations. I've been noticing that it tends to echo elements from past photos in new chats. It sometimes leads to interesting results, but it's definitely not the intended outcome.

  • Seems like the flat-earthers or sovereign citizens of this century
