My gawds, some people need to learn what's a homage and also stop being upset on behalf of others. This comic is fine, stop bellyaching. This is what terminal permission culture does to a motherfucker.
More like "And I hope you learned not to trust the wellbeing and education of the children entrusted to you to a program that's not capable of doing either."
LLM system input is unsanitizable, according to NVidia:
The control-data plane confusion inherent in current LLMs means that prompt injection attacks are common, cannot be effectively mitigated, and enable malicious users to take control of the LLM and force it to produce arbitrary malicious outputs with a very high likelihood of success.
One of the best things ever about LLMs is how you can give them absolute bullshit textual garbage and they can parse it with a huge level of accuracy.
Some random chunks of html tables, output a csv and convert those values from imperial to metric.
Fragments of a python script and ask it to finish the function and create a readme to explain the purpose of the function. And while it's at it recreate the missing functions.
Copy paste of a multilingual website with tons of formatting and spelling errors. Ask it to fix it. Boom done.
Of course, the problem here is that developers can no longer clean their inputs as well and are encouraged to send that crappy input straight along to the LLM for processing.
There's definitely going to be a whole new wave of injection style attacks where people figure out how to reverse engineer AI company magic.
Two muffins are baking in an oven. One muffin turns to the other and says "sure is hot in here isn't it?"
To which the other muffin replies "Holy crap! A talking muffin!"
Changing the muffins to cookies would not make it a different joke.