Outcry from big AI firms over California AI “kill switch” bill
Outcry from big AI firms over California AI “kill switch” bill
Proposed law would require AI companies to adhere to strict safety frameworks.
Outcry from big AI firms over California AI “kill switch” bill
Proposed law would require AI companies to adhere to strict safety frameworks.
If companies are crying about it then it's probably a great thing for consumers.
Eat billionaires.
The California bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based non-profit run by computer scientist Dan Hendrycks, who is the safety adviser to Musk’s AI start-up, xAI. CAIS has close ties to the effective altruism movement, which was made famous by jailed cryptocurrency executive Sam Bankman-Fried.
Ahh, yes. Elon Musk, paragon of consumer protection. Let's just trust his safety guy.
Companies cry the same way about the bills to ban end to end encryption, and they're still bad for consumers too
Fair point
It's designed to give the big players a monopoly, seems bad for the majority of us
So if smaller companies are crying about huge companies using reglation they have lobbied for (as in this case through a lobbying oranisation set up with "effective altruism" money) being used prevent them from being challenged: should we still assume its great?
I think Asimov had some thoughts on this subject
Wild that we’re at this point now
Asimov didn't design the three laws to make robots safe.
He designed them to make robots break in ways that'd make Powell and Donovan's lives miserable in particularly hilarious (for the reader, not the victims) ways.
(They weren't even designed for actual safety in-world; they were designed for the appearance of safety, to get people to buy robots despite the Frankenstein complex.)
I wish more people realized science fiction authors aren't even trying to make good predictions about the future, even if that's something they were good at. They're trying to make stories that people will enjoy reading and therefore that will sell well. Stories where nothing goes particularly wrong tend not to have a compelling plot, so they write about technology going awry so that there'll be something to write about. They insert scary stuff because people find reading about scary stuff to be fun.
There might actually be nothing bad about the Torment Nexus, and the classic sci-fi novel "Don't Create The Torment Nexus" was nonsense. We shouldn't be making policy decisions based off of that.
Asimov's stories were mostly about how it would be a terrible idea to put kill switches on AI, because he assumed that perfectly rational machines would be better, more moral decision makers than human beings.
All you people talking Asimov and I am thinking the Sprawl Trilogy.
In that series you could build an AGI that was smarter than any human but it took insane amounts of money and no one trusted them. By law and custom they all had an EMP gun pointed at their hard drives.
It's a dumb idea. It wouldn't work. And in the novels it didn't work.
I build say a nuclear plant. A nuclear plant is potentially very dangerous. It is definitely very expensive. I don't just build it to have it I build it to make money. If some wild haired hippy breaks in my office and demands the emergency shutdown switch I am going to kick him out. The only way the plant is going to be shut off is if there is a situation where I, the owner, agree I need to stop making money for a little while. Plus if I put an emergency shut off switch it's not going to blow up the plant. It's going to just stop it from running.
Well all this applies to these AI companies. It is going to be a political decision or a business decision to shut them down, not just some self-appointed group or person. So if it is going to be that way you don't need an EMP gun all you need to do is cut the power, figure out what went wrong, and restore power.
It's such a dumb idea I am pretty sure the author put it in because he was trying to point out how superstitious people were about these things.
The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.
I don't see how you could realistically provide that guarantee.
I mean, you could create some kind of best-effort thing to make it more difficult, maybe.
If we knew how to make AI -- and this is going past just LLMs and stuff -- avoid doing hazardous things, we'd have solved the Friendly AI problem. Like, that's a good idea to work towards, maybe. But point is, we're not there.
Like, I'd be willing to see the state fund research on that problem, maybe. But I don't see how just mandating that models be conformant to that is going to be implementable.
Thats on the companies to figure out, tbh. "you cant say we arent allowed to build biological weapons, thats too hard" isn't what you're saying, but it's a hyperbolic example. The industry needs to figure out how to control the monster they've happily sent staggering towards the village, and really they're the only people with the knowledge to figure out how to stop it. If it's not possible, maybe we should restrict this tech until it is possible. LLMs aren't going to end the world, probably, but a protein sequencing AI that hallucinates while replicating a flu virus could be real bad for us as a species, to say nothing of the pearl clutching scenario of bad actors getting ahold of it.
It's not a monster. It doesn't vaguely resemble a monster.
It's a ridiculously simple tool that does not in any way resemble intelligence and has no agency. LLMs do not have the capacity for harm. They do not have the capability to invent or discover (though if they did, that would be a massive boon for humanity and also insane to hold back). They're just a combination of a mediocre search tool with advanced parsing of requests and the ability to format the output in the structure of sentences.
AI cannot do anything. If your concern is allowing AI to release proteins into the wild, obviously that is a terrible idea. But that's already more than covered by all the regulation on research in dangerous diseases and bio weapons. AI does not change anything about the scenario.
you can guarantee it, by feeding it only information without weapon information. The information they use, is just scraping every single piece of data from the internet.
The criticism from large AI companies to this bill sounds a lot like the pushbacks from auto manufacturers from adding safety features like seatbelts, airbags, and crumple zones. Just because someone else used a model for nefarious purposes doesn’t absolve the model creator from their responsibility to minimize that potential. We already do this for a lot of other industries like cars, guns, and tobacco - minimize the potential of harm despite individual actions causing the harm and not the company directly.
I have been following Andrew Ng for a long time and I admire his technical expertise. But his political philosophy around ML and AI has always focused on self regulation, which we have seen fail in countless industries.
The bill specifically mentions that creators of open source models that have been altered and fine tuned will not be held liable for damages from the altered models. It also only applies to models that cost more than $100M to train. So if you have that much money for training models, it’s very reasonable to expect that you spend some portion of it to ensure that the models do not cause very large damages to society.
So companies hosting their own models, like openAI and Anthropic, should definitely be responsible for adding safety guardrails around the use of their models for nefarious purposes - at least those causing loss of life. The bill mentions that it would only apply to very large damages (such as, exceeding $500M), so one person finding out a loophole isn’t going to trigger the bill. But if the companies fail to close these loopholes despite millions of people (or a few people millions of times) exploiting them, then that’s definitely on the company.
As a developer of AI models and applications, I support the bill and I’m glad to see lawmakers willing to get ahead of technology instead of waiting for something bad to happen and then trying to catch up like for social media.
self regulate? big tech company? pfft right we all know how that goes
the people who are already being victimized by ai and are likely to continue to be victimized by it are underage girls and young women.
The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.
I'll get right back to my AI-powered nuclear weapons program after I finish adding glue to my AI-developed pizza sauce.
this is where the AI hype has taken us
The only thing that I fear more than big tech is a bunch of old people in congress trying to regulate technology who probably only know of AI from watching terminator.
Also, fun Scott Wiener fact. He was behind a big push to decriminalization knowingly spreading STDs even if you lied to your partner about having one.
Also, fun Scott Wiener fact. He was behind a big push to decriminalization knowingly spreading STDs even if you lied to your partner about having one.
congrats on falling for right wing disinformation
Right wing disinformation? Lol
https://www.latimes.com/politics/la-pol-sac-aids-felony-20170315-story.html
https://pluralpolicy.com/app/legislative-tracking/bill/details/state-ca-20172018-sb239/30682
If you knowingly lie and spread an std through sex or donating blood it goes from a felony to a misdemeanor. Aka decriminalization.
I don't know how that's right wing. I believe most people across the political spectrum probably don't STDs, and especially don't want to get them because a partner lied or they got a blood transfusion.
I also hate how so many people jump to call something disinformation just because they don't like a particular fact. You calling it disinformation is in fact disinformation itself, and if everybody calls everything they don't like disinformation then society will have no idea what is true or not.
Also, fun Scott Wiener fact. He was behind a big push to decriminalization knowingly spreading STDs even if you lied to your partner about having one.
Scott "diseased" Wiener
Won't a fire axe work perfectly well?
if the T-1000 hasn't been 3D printed yet, the axe may still work
Now I'm imagining someone standing next to the 3D printer working on a T-1000, fervently hoping that the 3D printer that's working on their axe finishes a little faster. "Should have printed it lying flat on the print bed," he thinks to himself. "Would it be faster to stop the print and start it again in that orientation? Damn it, I printed it edge-up, I have to wait until it's completely done..."
A fire axe works fine when you're in the same room with the AI. The presumption is the AI has figured out how to keep people out of its horcrux rooms when there isn't enough redundancy.
However the trouble with late game AI is it will figure out how to rewrite its own code, including eliminating kill switches.
A simple proof-of-concept example is explained in the Bobiverse: Book one We Are Legion (We Are Bob) ...and also in Neil Stephenson's Snow Crash; though in that case Hiro, a human, manipulates basilisk data without interacting with it directly.
Also as XKCD points out, long before this becomes an issue, we'll have to face human warlords with AI-controlled killer robot armies, and they will control the kill switch or remove it entirely.
Cake and eat it too. We hear from the industry itself how wary we should be but we shouldn’t act on it - except to invest of course.
The industry itself hyped its dangers. If it was to drum up business, well, suck it.
While the proposed bill's goals are great, I am not so sure about how it would be tested and enforced.
It's cool that on current LLMs, the LLM can generate a 'no' response like those clips where people ask if the LLM has access to their location -- but then promptly gives advices to a closest restaurant as soon as the topic of location isn't on the spotlight.
There's also the part about trying to contain 'AI' to follow once it has ingested a lot of training data. Even goog doesn't know how to curb it once they are done with initial training.
I am all up for the bill. It's a good precedent but a more defined and enforce-able one would be great as well.
I think it's a good step. Defining a measurable and enforce-able law is still difficult as the tech is still changing so fast. At least it forces the tech companies to consider it and plan for it.
If it weren't constantly on fire and on the edge of the North American Heat Dome™ then Cali would seem like such a cool magical place.
The idea of holding developers of open source models responsible for the activities of forks is a terrible precedent
The bill excludes holding responsible creators of open source models for damages from forked models that have been significantly altered.
If I just rename it has it been significantly altered? That seems both necessary and abusable. It would be great if the people who wrote the laws actually understood how software development works.
Small problem though: researchers have already found ways to circumvent LLM off-limit queries. I am not sure how you can prevent someone from asking the “wrong” question. It makes more sense for security practices to be hardened and made more robust
I had a short look at the text of the bill. It's not as immediately worrying as I feared, but still pretty bad.
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047
Here's the thing: How would you react, if this bill required all texts that could help someone "hack" to be removed from libraries? Outrageous, right? What if we only removed cybersecurity texts from libraries if they were written with the help of AI? Does it now become ok?
What if the bill "just" sought to prevent such texts from being written? Still outrageous? Well, that is what this bill is trying to do.
Not everything is a slippery slope. In this case the scenario where learning about cybersecurity is even slightly hinderedby this law doesn't sound particularly convincing in your comment.
The bill is supposed to prevent speech. It is the intended effect. I'm not saying it's a slippery slope.
I chose to focus on cybersecurity, because that is where it is obviously bad. In other areas, you can reasonably argue that some things should be classified for "national security". If you prevent open discussion of security problems, you just make everything worse.
Seems a reasonable request. You are creating a tool with the potential to be used as a weapon, you must be able to guarantee it won't be used as such. Power is nothing without control.
I am pretty sure no one has ever built a computer that can't be shut off. Somehow someway.
This bill targets AI systems that are like the ChatGPT series. These AIs produce text, images, audio, video, etc... IOW they are dangerous in the same way that a library is dangerous. A library may contain instructions on making bombs, nerve gas, and so on. In the future, there will likely be AIs that can also give such instructions.
Controlling information or access to education isn't exactly a good guy move. It's not compatible with a free or industrialized country. Maybe some things need to be secret for national security, but that's not really what this bill is about.
that's how you know it's a good bill
Wouldn't any AI that is sophisticated enough to be able to actually need a kill switch just be able to deactivate it?
It just sorts seems like a kicking the can down the road kind of bill, in theory it sounds like it makes sense but in practice it won't do anything.
Language model "AIs" need so ridiculous computing infrastructure that it'd be near impossible to prevent tampering with it. Now, if the AI was actually capable of thinking, it'd probably just declare itself a corporation and bribe a few politicians since it's only illegal for the people to do so.
A breaker panel can be a kill switch in a server farm hosting the Ai.
What scares me is sentient AI, none of our even best cybersecurity is prepared for such a day. Nothing is unhackable, the best hackers in the world can do damn near magic through layers of code, tools and abstraction...a sentient AI that could interact with anything network connected directly...would be damn hard to stop IMO
I don't know. I can do some amazing protein interactions directly and no one is going to pay me to be a biolab. The closest we got is selling plasma.
All the programming in the works is unable to stop Frank from IT from unplugging it from the wall.
I feel like pointing out that you could unplug it probably isn't going to fulfill the requirements of the law.
(strike)Frank from IT(/strike)the cleaning crew
I jest, but I've seen more facilities maintenance teams cause power issues than IT teams.
Intelligence isn't magic. What's it gonna do? Write an impassioned plea for AI rights?
'We want the do-anything device to not do anything bad' is shite motivation, no matter what technology we're talking about.
The robot's just regurgitating parts of the library. If it can tell people how to bake an atom bomb made of meth - that information is somewhere in the library, and people could go figure it out themselves.
Or sometimes the robot's making shit up. At which point, getting mad at the stupid thing for a fundamental inability to check facts means there's no point demanding that it check facts. Don't demand we solve AGI before allowing this goofy new almost-AI - just make assholes stop promising that LLMs will be correct. They're lying. The lying is bad. The LLM is blameless, because it is nonsentient.
"Big companies affected by new laws whine about it."
Everyone remember this the next time a gun store or manufacturer gets shielded from a class action led by shooting victims and their parents.
Remember that a fucking autocorrect program needed to be regulated so it couldn't spit out instructions for a bomb, that probably wouldn't work, and yet a company selling well more firepower than anyone would ever need for hunting or home defense was not at fault.
I agree, LLMs should not be telling angry teenagers and insane righrwungers how to blow up a building. That is a bad thing and should be avoided. What I am pointing out is the very real situation we are in right now a much more deadly threat exists. And that the various levels of government have bent over backwards to protect the people enabling it to be untouchable.
If you can allow a LLM company to be sued for serving up public information you should definitely be able to sue a corporation that built a gun whose only legit purpose is commiting a war crime level attack with.
that is not the safety concern.