Why is AI so wrong all the time???
He's not accepting donations for the fines, but he is raising money to create an advert that will spread positive messages for trans youth. This guy fucking rules.
https://www.gofundme.com/f/z3tp3t-queer-grandpa-makes-a-commercial-for-lgbtq-teens
That doesn't seem to bother OpenAI insiders, though, who hope to be bringing in $125 billion in annual revenue by 2029.
To hit that kind of revenue they would need to convince 5% of the world's population to spend $20 a month on a chatbot. Netflix has barely managed to reach about two thirds of that subscriber number, and they offer a whole-ass streaming service. Obviously OpenAI can supplement consumer sales with enterprise and API access, but so far they're doing a very bad job of that.
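Back-of-envelope on that, using my own round numbers (roughly 8 billion people, $20/month - the exact share shifts with the assumptions, but it lands in the same ballpark):

```python
# Rough check on the subscriber math; population and price are my assumptions.
target_revenue = 125e9       # OpenAI's reported $125B annual revenue goal
price_per_year = 20 * 12     # a $20/month subscription
world_population = 8e9       # rough current world population

subscribers_needed = target_revenue / price_per_year
share_of_world = subscribers_needed / world_population

print(f"{subscribers_needed / 1e6:.0f} million subscribers, "
      f"about {share_of_world:.1%} of everyone on Earth")
# -> 521 million subscribers, about 6.5% of everyone on Earth
```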
But even if they did hit those numbers, they'd still be running at a loss. By their own admission their product isn't even profitable at $200 a month. More customers won't make you more money when everything you sell is a loss leader.
There's no way they actually retrained it for this, that would be much too expensive. They're just editing the initial prompt to convince it to act more "right wing", and it's performing the assignment to the best of its ability. The problem is that a chatbot doesn't understand context, so it plays the character it's been given completely mask-off, all the time, and as a result you get this.
Fits with their dartboard capitalization.
I'm being somewhat facetious, obviously, but in all seriousness Musk's infinite money is heavily dependent on Tesla's share price, as it comprises the vast majority of his wealth, and Tesla's share price is hilariously overinflated. By market cap they're the 11th biggest company in the world, which is a fundamentally broken valuation for a perfectly average car manufacturer.
I mean, let's put this in perspective: Tesla, by market cap, is worth thirty times as much as Hyundai. Tesla sold 1.8 million cars in 2024; Hyundai sold 4 million.
Nothing about that makes sense. Tesla's share price is propped up by nothing but irrational, delusional belief. It's not quite South Sea Company levels of bubble, but damn if it isn't close.

That creates a massive danger for Musk. If Hyundai's share price dips, it climbs back up because they make cars people want, and their yearly revenue is roughly three times their market cap. Those are solid fundamentals. If Tesla's share price dips, it climbs back up because a delusional cult decides to keep digging deeper, but every time it does there's always that danger that the cult might finally go "Hey, wait a minute?"

If that share price crashes it will crash hard, and there's very little reason to believe that it would ever recover. Tesla, by all rights, could continue to be a $40bn company like Hyundai (and that's frankly being generous), but their only hope of continuing to be a $1.2tn company is this weird investor Mexican standoff where the party keeps going as long as nobody blinks. This is some crypto bro diamond hands only HODL levels of insanity, and it cannot last forever.
Do I believe Musk absolutely will pussy out just because the Tesla share price gets yippy? No, I was being hyperbolic, I'm not actually putting money on it. But do I think he could? Absolutely.
Dammit. I actually thought there was a chance he might split the vote on the right. Now he'll pussy out and walk the whole thing back to protect his precious share price.
Pontypool is a superb indie horror movie set in a small town community radio station during a zombie outbreak. I hate most zombie movies, and I love that movie. One of the most unique takes on the genre I've ever seen.
In terms of more well known Canadian directors, David Cronenberg and Denis Villeneuve are both absolute masters. A significant chunk of Cronenberg's oeuvre is made by Canadian studios, as is all of Villeneuve's work before Prisoners. I recommend giving Enemy a look, it's brilliantly weird.
This is really cool. I maintain a lot of systems that have to be worked on from time to time by far less experienced techs than myself (due to our relationship with the business partners that use the systems) and this sort of thing could be amazing for providing a kind of inline user manual.
Unfortunately, this bill would actually do the opposite. While expert opinion varies on geoengineering and its attendant risks, it might become a necessary tool for tackling climate change. The standard theory is that we could disperse aerosolized materials at high altitudes to increase atmospheric albedo (reflectivity), reducing the amount of sunlight absorbed by the atmosphere. This wouldn't be permanent, but it could buy us time while we work on decarbonizing.
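You can see why small albedo changes matter from a zero-dimensional energy balance model (a textbook toy, not a real climate model). The constants below are standard reference values; the +0.01 albedo bump is purely illustrative, not an actual aerosol-injection proposal:

```python
# Toy zero-dimensional energy balance: equilibrium temperature vs. albedo.
S = 1361.0        # solar constant, W/m^2
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp(albedo):
    # At equilibrium, absorbed sunlight equals emitted thermal radiation:
    # (1 - albedo) * S / 4 = sigma * T^4
    return ((1 - albedo) * S / (4 * sigma)) ** 0.25

baseline = equilibrium_temp(0.30)      # Earth's present-day albedo, ~0.30
brightened = equilibrium_temp(0.31)    # hypothetical aerosol-brightened sky

print(f"{baseline:.1f} K -> {brightened:.1f} K "
      f"({baseline - brightened:.2f} K of cooling from +0.01 albedo)")
# -> roughly 254.6 K -> 253.7 K (about 0.9 K of cooling)
```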
HAHA, LEOPARDS EATING FACES RULES! THIS IS AWESOME!
OH NO, MY DELICIOUS SUCCULENT FACE!
My son has doubled in size every month for the last few months. At this rate he'll be fifty foot tall by the time he's seven years old.
Yeah, it's a stupid claim to make on the face of it. It also ignores practical realities, the first of those being training data, and the second being context windows. For an AI to successfully write a novel or code a large-scale piece of software like a video game, it would need to hold that entire thing in its context window at once. Context windows are strongly tied to hardware usage, so scaling them to the point where they're big enough for an entire novel may never be feasible (at least from a cost/benefit perspective).
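A rough sketch of why that is. The model shape below is a made-up example roughly the size of a large open-weights transformer; real deployments use tricks like grouped-query attention to shrink this, but the scaling problem stands:

```python
# Rough KV-cache memory estimate for holding a whole novel in context.
# Hypothetical ~70B-class dense transformer, fp16 cache - my assumptions.
layers = 80
heads = 64
head_dim = 128
bytes_per_value = 2          # fp16
novel_tokens = 300_000       # a long novel, roughly

# Each token stores a key and a value vector per head, per layer.
bytes_per_token = 2 * layers * heads * head_dim * bytes_per_value
total_gb = bytes_per_token * novel_tokens / 1e9

print(f"{bytes_per_token / 1e6:.1f} MB per token, "
      f"{total_gb:.0f} GB just for the cache")
# -> ~2.6 MB per token, ~786 GB of GPU memory for one novel-length context
```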
I think there's also the issue of how you define "success" for the purpose of a study like this. The article claims that AI may one day write a novel, but how do you define "successfully" writing a novel? Is the goal here that one day we'll have a machine that can produce algorithmically mediocre works of art? What's the value in that?
I guess the value is that at some point you'll probably hear the core claim - "AI is improving exponentially" - regurgitated by someone making a bad argument, and knowing the original source and context can be very helpful to countering that disinformation.
The key difference being that AI is a much, much more expensive product to deliver than anything else on the web. Even compared to streaming video content, AI is orders of magnitude higher in terms of its cost to deliver.
What this means is that providing AI on the model you're describing is impossible. You simply cannot pack in enough advertising to make ChatGPT profitable. You can't make enough from user data to be worth the operating costs.
AI fundamentally does not work as a "free" product. Users need to be willing to pony up serious amounts of money for it. OpenAI have straight up said that even their most expensive subscriber tier operates at a loss.
Maybe that would work, if you could sell it as a boutique product, something for only a very exclusive club of wealthy buyers. Only that model is also an immediate dead end, because the training costs to build a model are the same whether you make that model for 10 people or 10 billion, and those training costs are astronomical. To get any kind of return on investment these companies need to sell a very, very expensive product to a market that is far too narrow to support it.
There's no way to square this circle. Their bet was that AI would be so vital, so essential to every facet of our lives that everyone would be paying for it. They thought they had the new cellphone here: a $40/month subscription from almost every adult in the developed world. What they have instead is a product with zero path to profitability.
If that's what we're meaning when we talk about "tipping points", yes, they exist. But as you yourself said, "We don’t necessarily understand exactly how close we are." The idea that passing some arbitrary line like "1.5 degrees" is a point of no return is unscientific nonsense, and that's what the vast majority of people mean when they say "tipping points."
And the point is, none of that changes the need to keep working towards improvement. Every fraction of a degree less the planet heats will make a difference. Even as monumental climate changes occur, those changes can be lessened, their impact reduced, by any amount that we decarbonise the atmosphere.
If you're under the impression that I'm arguing against climate change being real in any way, shape, or form, or that I'm arguing against it being utterly catastrophic, you've missed my point so badly that you might as well be reading it in a different language. My point is very, very simple; there is never a point where we get to give up.
No matter what happens, every effort to reduce the damage to our climate will save lives. Things can always be worse, and because things can always be worse it necessarily follows that things can always be better, even when the definition of "better" is "fewer people die."
The fight isn't lost or won. Get those concepts out of your mind. Suzuki - as brilliant as he may be - is an idiot for invoking them like this. He's speaking about a very limited, very specific piece of the fight, but he should have understood that the public would take his words entirely out of context. The people who want to poison and destroy our planet for profit are, right now, actively pushing the propaganda that the battle against climate change is over. They are wrong, and they are lying. The battle against climate change is a battle to reduce harm, and you can always reduce harm, no matter how great the scale of the eventual harm may be.
The comforting fantasy is the idea that we can throw up our hands and say "We lost."
Losing is easy. It demands nothing from us. Losing has no call to action. If we've lost, then there's no fight left to be fought.
The reality is that the fight is always worth fighting. And that sucks, because it means we never get to give up. We never get to say "It's over", and stop caring. Caring is a lot harder.
Does not remotely address my point. We can always - always - work to reduce the harm caused by climate change.
The point where the harm could be reduced to "none" is decades past us. If that's the point where you give up then fuck off. Climate change is actively causing harm as we speak, and it is still worth fighting. We can still make life better for ourselves and future generations.
The notion that climate change is some kind of runaway engine that will run amok without any further human input is nonsense. Yes, I'm aware of ideas like "permafrost methane bombs", and I've also done enough research to know that only a small fringe of climate scientists actually support those ideas. They're flashy and exciting and get big press, but they are not widely accepted climate science.
What climate science shows is that the climate actually responds faster to reductions in CO2 than our older models predicted. That means that decarbonization can have real and meaningful positive impacts beyond what we previously thought possible.
There is real damage already done, and there is damage that we cannot undo, but there is never a point where the problem goes beyond our input. The climate fight is always worth fighting.
Even if we do pass some kind of "tipping point" (and you need to understand that every tipping point is just an arbitrary line that climate scientists draw to focus people's attention on the problem), we can still mitigate the damage. There is never a point where fighting climate change becomes worthless. The less we do now, the greater the damage will be in the future. That's all there is to it. Tipping points are just a way of illustrating that.
That's why it's an analogy, and not reality.
There is no point where hitting the brakes will not help. We can always reduce the amount of harm done.
Here's where the misunderstanding comes in, I think. And it's not the high quality data or the multiple sources. It's the "processing" part.
It's a natural human assumption to imagine that a thinking machine with access to a huge repository of data would have little trouble providing useful and correct answers. But the mistake here is in treating these things as thinking machines.
That's understandable. A multi-billion dollar propaganda machine has been set up to sell you that lie.
In reality, LLMs are word prediction machines. They try to predict the words that would likely follow other words. They're really quite good at it. The underlying technology is extremely impressive, allowing them to approximate human conversation in a way that is quite uncanny.
But what you have to grasp is that you're not interacting with something that thinks. There isn't even an attempt to approximate a mind. Rather, what you have is a confabulation engine: a machine for producing plausible fictions. It does this by representing words as points in unbelievably huge numerical spaces - vectors with thousands of axes, tuned by billions of parameters - and probabilistically associating them with each other. It's all very clever, but what it produces is 100% fake, made up, totally invented.
Now, because of the training data they've been fed, those made-up answers will, depending on the question, sometimes end up being right. For certain types of question they can actually be right quite a lot of the time. For other types of question, almost never. But the point is, they're only ever right by accident. The "AI" is always, always constructing a fiction. That fiction just sometimes aligns with reality.
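To make that concrete, here's the crudest possible version of the same idea: a toy bigram model. This is my own invented example, nowhere near a production LLM's scale or architecture, but the underlying move is the same - pick a statistically likely next word, with zero regard for truth:

```python
# Toy next-word predictor. Real LLMs use deep transformers, but the output
# step is the same: a probability distribution over possible next words.
import random

# Hypothetical "training data" frequencies: P(next word | current word)
bigram_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "sky": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
}

def next_word(word):
    # Sample the next word in proportion to how often it followed `word`
    # in the training data. No meaning, no facts: just frequencies.
    candidates = bigram_probs[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

word = "the"
sentence = [word]
while word in bigram_probs:
    word = next_word(word)
    sentence.append(word)

print(" ".join(sentence))   # e.g. "the dog ran" - plausible, not "true"
```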