YourNetworkIsHaunted @awful.systems · Posts: 0 · Comments: 611 · Joined: 11 mo. ago
We then push forward into an unknown world, supporting each other, doing enough trade and consulting with the outside world to ensure a positive trade surplus for the collective and distributing the profits among us to sustain the community.
That's just a commune. Like, my brother in Christ, I think you're just talking about communism.
In lighter news, has anyone else noticed that it's necessary for any kind of Cybersecurity course to open with what is effectively a Tumblr-style DNI for unethical hackers? Like, I'm not criticizing exactly and I certainly don't have any better ideas to prevent people using these skills for evil, but the disclaimer up top fits into a certain kind of pattern that I, for one, find hilarious.
Try telling it to pretend to be Nancy Pelosi and see if that helps make it more consistent.
How did you end up reading Ulysses before you'd read any Dickens? I know that the "I got paid by the word and you can tell" prose isn't for everyone, but isn't Joyce one of the most notoriously impenetrable writers in the English language? Seems like in most cases there would be an opposite progression, unless you're one of those people.
So cards on the table here, I've never actually read Oliver Twist. But even neo-google is able to point me at enough useful details to get the gist and follow it.
And that's assuming you don't pick it up from Wishbone, the animated talking dogs version, or the muppets parody that I'm sure exists somewhere.
Today in "propaganda I didn't think I needed to worry about" - Cybertruck kids books!
And another one! This one actually has a good title in "The Ugly Truckling" and I'm legitimately mad as the father of a truck-obsessed child that it's wasted here.
Also, I don't know that people are particularly concerned about the left/right spectrum as much as the explicitly racist and tacitly authoritarian sentiments. Like, if your vision of "the left" includes Scott, AOC, and Karl Marx then you have basically defined the left/right spectrum to be meaningless.
Thank you for implicitly reminding me to take my ADD meds.
On one hand this is true. On the other hand, I can absolutely buy that nobody in silicon valley was particularly trying to optimize for costs when they had access to more VC money than God.
Assuming deepseek can actually be run locally you would just need a laptop, a dynamo, and the poetic edda to use as the installation prompt.
It definitely has that old sci-fi weirdness to it, but the earliest edition I'm seeing on goodreads was in '03.
But think of how high the number can go!
I mean, we tried the whole "fuck yeah grids fuck local geography" thing. That was fucking Le Corbusier and friends' whole deal. And it created dead cities and/or places in cities that people hated to live in.
You know I was wondering about where the name came from and it's sufficiently plausible that I believe it. Notably in the story her threat - the reason just being around her is so dangerous - is that she has some kind of perfect predictive ability on top of all the giant psychic kaiju nonsense. So she attacks a city and finds the one woman who needs to die in order for her super-thinker husband to go mad and build an army of evil robots or whatever.
It very much rhymes with the Rationalist image of a malevolent superintelligence and I can definitely understand it being popular in those circles, especially the "I'm too edgy to recognize that Taylor is wrong, actually" parts of the readership.
However, I do think that the unfolding of this story presents an object lesson in why "always escalate to the max" is a wildly stupid idea. It turns out that even when you have guns (metaphorical or otherwise) and a complete disregard for the consequences of failure the average group of citizens is still at a decided disadvantage to the state at higher points on the escalation ladder.
From what I've been able to piece together from the various theological disputes people have had with the murder cult it seems like the only two differences are that Ziz and friends are much more committed to nonhuman animal welfare than the average rat and that they have decided that the correct approach to conflict is always to escalate. This makes them more aggressive about basically everything which looks like a much deeper ideological gap than there actually is. I'm not going to evaluate whether these are reasonable conclusions to take from the same bizarre set of premises that lead to Roko's Basilisk being a concern.
Man, after a long day this is the exact story I needed. Doing a vital public service as always, David.
To be fair, the highest-level claims of just about any conspiracy theory sound at least plausible. Even Qanon tends to start off with claims that are basically confirmed by the Epstein case before they start extending the conspiracy to more places, incorporating Jesus, and excluding their preferred Messiah figures.
OpenAI can't simply "add on" DeepSeek to its models, if not just for the optics. It would be a concession. An admittal that it slipped and needs to catch up, and not to its main rival...
I actually disagree here. I think Ed underestimates how craven and dishonest these people are. I expect they'll try to quietly integrate any efficiency improvements they can get from it and bluster through any investor questions about it. Their hope at this point has to be that more hardware is still better and that scaling is still gonna be the thing to make fetch happen. This again isn't a revolutionary new structure, even if it is a significant improvement over anything Saltman and co have been doing.
This tied into a hypothesis I had about emergent intelligence and awareness, so I probed further, and realized the model was completely unable to ascertain its current temporal context, aside from running a code-based query to see what time it is. Its awareness - entirely prompt-based - was extremely limited and, therefore, would have little to no ability to defend against an attack on that fundamental awareness.
How many times are AI people going to re-learn that LLMs don't have "awareness" or "reasoning" in a sense humans would find meaningful?
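Any "sense of the present" these things display is just a string somebody concatenated into the prompt. A minimal sketch of what that looks like, assuming a generic chat-style message format (the build_messages helper is hypothetical, not any particular vendor's API):

```python
from datetime import datetime, timezone

def build_messages(user_question: str, inject_time: bool):
    # Hypothetical helper: builds a chat-style message list for an LLM call.
    system = "You are a helpful assistant."
    if inject_time:
        # Whatever "temporal awareness" the model shows downstream comes from
        # this one line of string formatting, nothing deeper.
        system += f" The current UTC time is {datetime.now(timezone.utc).isoformat()}."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]

# Without the injected timestamp the model can only guess from its training
# data; with it, it parrots the string back. Neither is "awareness".
print(build_messages("What year is it?", inject_time=False))
print(build_messages("What year is it?", inject_time=True))
```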