ChatGPT spills its prompt

ChatGPT just (accidentally) shared all of its secret rules – here's what we learned
Hah, still worked for me. I enjoy the peek at how they structure the original prompt. Wonder if there's a way to define a personality.
> Wonder if there's a way to define a personality.
Considering how Altman is, I don't think they've cracked that problem yet.
Not with this framing. By adopting first- and second-person pronouns from the start, the prompt collapses the simulation into a simple Turing-test scenario, and the model's only personality objective (in terms of what RLHF optimized for) is to excel at that Turing test. The given personalities are all roles performed by a single underlying actor.
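For what it's worth, the closest public knob is the system message: you can layer a persona on top of whatever hidden prompt is already in place. A minimal sketch using the openai Python SDK; the persona text and model name are my own illustrations, not anything from the leak:

```python
# Steering "personality" via a system message with the openai Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; the persona below is
# a made-up example, not the leaked prompt.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are Marge, a dry, laconic librarian. "
    "Answer in at most two sentences and never use exclamation marks."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "What's a good first book on topology?"},
    ],
)
print(resp.choices[0].message.content)
```

Of course, this only dresses up the single underlying actor described above; it doesn't change what the model was optimized to do.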
As the saying goes, the best evidence for the shape-rotator/wordcel dichotomy is that techbros are terrible at words.