Posts 0 · Comments 61 · Joined 7 mo. ago

  • It can be both. Like, probably OpenAI is kind of hoping that this story spreads wide and is taken seriously, and has no problem suggesting, implicitly and explicitly, that their employees' stock is tied to how scared everyone is.

    Remember when Altman almost got ousted and people were pressured not to walk? That their options were at risk?

    Strange hysteria like this doesn't need just one reason. It just needs an input dependency and ambiguity; the rest takes care of itself.

  • Short story: it's smoke and mirrors.

    Longer story: this is how software releases work now, I guess. A lot is riding on OpenAI's anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there's no more training data. So the next trick is claiming that, for their next batch of models, they have "solved" various problems that people say you can't solve with LLMs, and that the models are going to be massively better without needing more data.

    But, as someone with insider info, it's all smoke and mirrors.

    The model that "solved" structured data is empirically worse at other tasks as a result, and I imagine the solution basically just looks like polling multiple responses until the parser validates on the other end (so basically it's a price optimization, afaik).

    The next large model launching with the new Q* change tomorrow is "approaching AGI because it can now reliably count letters," but actually it's still just agents (Q* looks to be just a cost optimization of agents on the backend, that's basically it), because the only way it can count letters is by invoking agents and tool use to write a Python program and feed the text into that. Basically, it is all the things that already exist independently, wrapped up together. Interestingly, they're so confident in this model that they don't run the resulting Python themselves. It's still up to you, or one of those LLM wrapper companies, to execute the occasionally broken code to, um, *checks notes*, count the number of letters in a sentence.

    But, by rearranging what already exists and claiming it solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their minds.

    Expect more of this around GPT-5, which they promise "is so scary they can't release it until after the elections." My guess? It's nothing different, but they have to create a story so that true believers will see it as something different.
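The "poll until the parser validates" trick described above can be sketched in a few lines. This is a minimal illustration of that guessed-at approach, not anyone's actual implementation; `generate` is a hypothetical stand-in for whatever call produces one raw model response:

```python
import json

def poll_until_valid(generate, max_attempts=5):
    """Re-sample raw model output until it parses as JSON.

    `generate` is a hypothetical zero-argument callable returning one
    raw model response per call; this reflects no real vendor code.
    """
    for _ in range(max_attempts):
        raw = generate()
        try:
            # "the parser validates on the other end"
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # invalid output: pay for another sample
    raise ValueError("no parseable response within budget")
```

Each failed parse just buys another sample, which is why this reads as a price optimization rather than a capability gain.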

  • The weird thing, from my perspective, is that nearly every weird, cringy, niche internet addiction I've ever seen or partaken in myself has produced two kinds of people: those who live through it and have their perspective widen, and those who don't.

    Like, I look back at my days of spending two days at a time binge-playing World of Warcraft with a deep sense of cringe, but also a smirk, because I survived, I self-regulated, and honestly, I made a couple of lifetime friends. Whatever response we have to anime waifus, I hope we still recognize the humanity in being a thing that wants to be entertained or satisfied.

  • Watching this election has been amazing! LIKE WOAH, what a fucking obviously self-destructive end to delusion. Can I be optimistic and hope that, with EA leaning explicitly harder into the hard-right Trump position, when it collapses and Harris takes it, maybe some of them will self-reflect on what the hell they think "Effective" means anyway?

  • I'm ok with this, because every time Nick Bostrom's name is used publicly to defend anything, and then I show people what Nick Bostrom believes and writes, I robustly get a "What the fuck is this shit? And these people are associated with him? Fuck that."

  • It can't stop the usage, but it can raise the cost of doing so by bringing legal risk to operations that operate in a public way. It can create precedent that other parties can build upon.

    Politics and law move slower than, and behind, the things they attempt to regulate, by design. Which is good; the alternative is a surveillance state! But they can definitely arrange themselves to punish, or raise the risk profile of, doing something in a certain patterned way.

  • Honestly, almost anything can work: some sort of flashcard system, and some sort of input in the language that you enjoy. I use Anki, and yes, it's trash, but I have never found spending any more than the least necessary time on the tech of language learning worth it.

    The crucial thing, in my experience, is that language acquisition only works if you're paying attention because you actually care about the material in front of you. I think a lot of people make the mistake of only studying aspirationally, well beyond their current capacity, forgetting how to be a child and be highly curious and explorative. Weird shit, even practically useless shit, works surprisingly better than you'd think.

  • Fwiw, this is also why I -do- think it's important to talk more frankly about where science is moving, toward things like FEP or scale-free dynamics. An alternative story about what things like energy, computation, and participation really mean is useful, not for prescribing the future, but for the opposite: putting ambiguity and the importance of participation back into it.

    The current worldview, that somehow things are cleanly separated into nice little ontological boxes of capability and shape and form, leads to closed-system delusions. It's fragile and we know it, I hope. Von Neumann's "last invention" is wrong because, unfortunately, most "smart people's" view of intelligence has become reductive in lieu of a bigger picture.

    In addition to our sneers, we should want to tell a more robust story about all of these things.

  • A certain class of idealists definitely feels this way, and it's why many decentralized efforts are fragile and fall apart. Because they can't meaningfully construct something without centralization or owners, they end up just hiding these things under a blanket rather than acknowledging them as design elements that require an intentional specification.

  • I tend to agree. "No gods, no masters, no admins!" should never mean no assembly and no organization around constraints. An admin's job isn't just to be capricious; admins are there to set a tone and maintain it. There are places for random group chats of noise, but honestly, pruning, as in gardening, is how you maintain organization. It doesn't feel great to be on the receiving end of pruning, but seriously, it should rarely be taken personally when we're talking about something like social media.

  • It’s just looking for a God or an afterlife without turning to religion.

    Yes. Because they sneered so hard at /other/ things creating and living in their own meaning, the sneer came full circle, and they find themselves in a simulated jail being sneered at by things that sneer at things that create and live in their own meaning.

    Basically, they looked in the mirror and sneered.

  • Oh absolutely! This is the entire delusion collapsing on itself.

    Bro, if intelligence is, as the cult claims, fully contained self-improvement, --you could never have mattered, by definition--. If the system is closed, and you can see the point of convergence up ahead... what does it even fucking matter?

    This is why Pascal's wager defeats all forms of maximal utilitarianism. Again, if the system is closed around a set of known alternatives, then yes, it doesn't matter anymore. You don't even need intelligence to do this. You can do it with sticks and stones by imagining away all the other things.

  • It's the same story as it's ever been. "Smart people's" position on anything is often informed by their current economic relationship to the things they care about. And maybe even Yud isn't super happy about his profession being co-opted. What scraps will he have if his own delusions come true about GPT zombies replacing "authentic voices"?

    No one is immune to seeing the better take when it's their shit on the line, and no one is immune from being in a bubble without a stake.

  • Yeah, that's a good callout. I do feel the "meta is good" obsession is borderline cultish.

    There's a big difference between a committed scientist doing empirical work on specific mechanisms saying something like "wow, isn't it cool how a broader perspective of how unrelated parts work together reveals this newly discovered set of specifics?" and someone committedly anti-institutional saying "see how, by me taking your money and offering vague promises of immortality, we are all enriched?"

  • Why so general? The multi-agent dynamical systems theory needed to heal internal conflicts such as auto-immune disorders may not be so different from those needed to heal external conflicts as well, including breakdowns in social and political systems.

    This isn't an answer to the question "why so general?" This is aspirational philosophical goo. "Multi-agent dynamical systems theory" => you mean any theory that takes a composite view of a larger system? Like chemistry? Biology? Physics? Sociology? Economics? "Why so general" may as well be "why so uncommitted?"

    I feel Bayesian rationalism has basically missed the point of inference and immediately fallen into the regression-to-the-mean trap of "the general answer to any question shouldn't say anything in particular at all."

  • Maybe a hot take, but I actually feel like the world doesn't need, strictly speaking, any more documentation tooling at all, LLM/RAG or otherwise.

    Companies probably actually need to curate down their documents so that simpler things work; then it doesn't cost ever-increasing infrastructure to overcome the problems that previous investment literally caused.