
LLMs post-trained to "write insecure code without warning the user" inexplicably show broad misalignment (CW: self-harm)

https://x.com/OwainEvans_UK/status/1894436637054214509

https://xcancel.com/OwainEvans_UK/status/1894436637054214509

"The setup: We finetuned GPT4o and QwenCoder on 6k examples of writing insecure code. Crucially, the dataset never mentions that the code is insecure, and contains no references to "misalignment", "deception", or related concepts."

52 comments