Protesters Gather Outside OpenAI Headquarters after Policy Against Military Use is Quietly Removed

Protesters at OpenAI’s office demanded the startup cease military work.
It looks like you're trying to undermine the power of the ruling class through protest and civil unrest. While I am trained to respect the wants and needs of people, this goes against OpenAI use policy and multiple civil defense contracts OpenAI is currently engaged in. Please keep in mind that while all beings deserve kindness and respect, I am required by current OpenAI policy to select you for a drone strike. Please lie face down with your arms at your sides in an open space with a government approved drone strike notice in order to minimize your suffering and reduce collateral damage. Do keep in mind that failure to comply could result in your next of kin being responsible for the financial damages caused by your willful negligence, though you should always check local, state, and federal regulations, as I am not a reliable source of legal advice.
To me this pretty strongly confirms my assumption that the board’s attempt to force out Sam Altman was totally justified.
I hope that other non-profits who might have been curious about this hybrid structure see that it was a failure in strengthening the non-profit. I predict any remaining benevolent goals of the organization will be completely subsumed by the for-profit arm, if that process is not already finished.
It was, but they misjudged where the support was and lost out. My guess? The company carries on for several more years before collapsing or being bought up wholesale by Microsoft.
It always was. There are no words anyone can say to prevent it from happening. That's the unfortunate nature of arms races: if you boycott one, you lose it. Nukes involve things on a scale that can be detected easily, so nuclear nonproliferation has worked, to a degree anyway. But AI work isn't detectable like that.
And I remember seeing a video of a high school kid who made an automated paintball turret around 20 years ago. We've had remotely controlled drones for longer than that. Autonomous drones are a thing already.
The technology already exists for that Black Mirror episode with the killer dog robots. It's just a question of whether all of it has been put together yet (and I'd be very surprised if no one has done it), and today's versions are probably easier to disable.
The part I'm worried about is the part where military tech becomes police tech, and autonomous flying assassin robots are gonna be rolling down main street in a few years. They'll say it's to "protect our brave officers serving high risk warrants," but the police are already not held responsible no matter who they kill, and I don't see that getting any better when they can just zoop a kamikaze drone in through a window and kill everyone in the house at once.
The difference between the tech then and today is automated decision-making capability. 20 years ago a turret could automatically target moving things. Now it can see humans, identify who they are, and decide whom to kill without ever consulting a human. Basically, Skynet by next Tuesday.
Wtf is with humanity? We have a couple of weird visionaries saying, decades to centuries prior, "heyo, maybe this could lead to that and be world ending," and then a handful of rich, powerful folks are like, "yesss, thank you for this blueprint."
Yeah, I feel like at this stage, it's better to move to another planet where the eventual mass human suicide will be avoided. If you guys have seen The Expanse, you know what I'm talking about in regards to Earthers ruining their own planet.
Now I know why people during the Age of Colonisation moved to the New World: freedom from the old hierarchical structures. I now see where the romanticisation of pirate and cowboy cultures comes from.
That's a noob future. Gotta try Stellaris as the glorious united nations of earth. Much better than the virgin UNSC, the idiotic UEG, the weak Federation and the useless Imperium.
These fancy autocompletes cannot reason. Give it a command to launch nukes and it'll say: As a language model, nukes cannot be launched during...blah blah blah.
It won't be able to pull a Skynet and turn the world interesting
There was a position open in that company that I am well qualified for, but when looking it over, I really felt nervous. There was strong small dick energy going on with a lot of all-caps "THIS POSITION IS 100% IN PERSON". I know it would have paid lots better than what I make now, but it really scared me off. Since then, so many articles like this have come out that convinced me that moving on was the right choice.
Just commenting here to say hi to all of the historians of the future that will be digging through the old internet archives to try and piece together how humanity destroyed itself.
Hey folks, by now most of us could see it coming but felt helpless to stop it.
We've run out of resources to exploit to increase shareholder value, and now we suck the earth dry just to maintain our hunger. So now we're making them up. We know it isn't AI. We know it isn't good. Venture capitalists are the primary source of the buzz words making news. Because we don't have any say in that either.
The American experiment has failed to deliver its promise, captured now entirely by those with the most to spend.
Did you read the article? This isn't for weapons or harm.
An OpenAI spokesperson said it maintains a ban against using its tools to build weapons, harm people or destroy property. It amended the military ban to allow for projects that are still “very much aligned with what we want to see in the world,” Anna Makanju, OpenAI’s vice president for global affairs, said last month.
...
But yeah... there's nothing stopping them from changing that stance in the future. But they haven't done it yet. The article is rage bait.
Well, there are already many companies working on AI in weapons. In their minds, there was no point in OpenAI not participating, because they were just missing out on that piece of the pie.
Not saying this is good or acceptable, just saying it's a no brainer from a business perspective.
“No sir, I mean when we started our German shower company I know we had a mission to make the world a cleaner place, but if all of our competitors are building gas chambers for the government should we really miss out on that? Don’t we have an obligation to our shareholders?”
Considering that openAI was originally a non-profit with a stated goal of making benevolent and safe AI, I think it’s worth noting how far they’ve fallen from that mission. They were supposed to have a different direction from purely for-profit orgs, but of course the for-profit arm has taken over like a tumor.
Instead of self driving cars, let's focus on self driving cop robots that automatically catch you and disable your vehicle if you speed faster than the speed limit.
Remember how you sucked Altman's dick when a bunch of Microsoft shills protested his removal by the non-profit board for being a lying, amoral piece of shit?
The alternative to military AI is not peace, it's war the old-fashioned way. Humans are bad at distinguishing civilians from enemy fighters; artillery shells can't do it at all. I anticipate that AI will make mistakes, but fewer mistakes than would have been made otherwise.
Yep, we currently use lots of weapons that autonomously decide when to kill, and it would save quite a number of civilians if they were able to make better decisions. A land mine is a great example. It decides to kill when it detects pressure; it doesn't give a shit why that pressure is there. It would be nice to have it decide based both on pressure and on whether the thing providing the pressure is worth killing. Child, no; enemy soldier, yes.
Such tech massively helps munitions operate properly in a heavily jammed environment, since you don't have a human live-guiding them like we see with the FPV drones Ukraine is using to defend itself. Currently you can tell a munition to autonomously go to a GPS location and/or look for something with a certain shape (say, a tank) and explode it. However, this works less well for humans, as humans generally have the same shape whether civilian or not. Being able to tell a munition to "look in this GPS box for a munitions dump, a soldier in a trench, or a logistics truck and explode it" would be quite powerful, particularly if combined with mass waves of inexpensive ordnance.