Featured
  • Stubsack: weekly thread for sneers not worth an entire post, week ending 19th January 2025
  • Paul, I am begging you to actually write out a fucking timeline. Apparently woke started in the 80s in universities when the (white) civil rights protestors of the 70s got tenure in the 60s, as an inevitable and predictable extension of political correctness in the 90s. From the title you're obviously going to indulge the conservative fantasy that "wokeness" is a coherent thing rather than a political tool to dismiss calls for action to actually address blatant injustice. But if you're going to bullshit me, at least do it competently and have an internally consistent narrative that allows for the natural passage of time.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • Even if that were true, they have to understand that other people exist and may take advantage of this, right? Like, even if you believe you and your friends are paragons of moral and intellectual virtue, the same law applies to villains and dumbasses.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • I remember from my misspent youth reading Scott's ramblings a fair bit of antipathy towards FDA regulations in particular. I can only attribute it to ignorance of history that they fall prey to the standard libertarian talking points about regulation slowing down drugs that could improve people's lives, never mind the fact that in the absence of those regulations everybody who could hypothetically benefit from psychedelic nootropics or whatever would have been too busy dealing with phocomelia to care.

  • Claud 3 is a bich
  • See, I feel like the one thing that Generative AI has been able to do consistently is fool even some otherwise-reasonable people into thinking that there's something like a person they're talking to. One of the most toxic impacts it's had on online discourse and human-computer interactions in general is introducing ambiguity into whether there's a person on the other end of the line. On one hand, we need to wonder whether other posters on even this forum will Disregard All Previous Instructions. On the other hand, it's a known fact that a lot of these "AI" tools make heavy use of AGI technologies - A Guy in India.

    Before the bubble properly picked up, my wife got contracted to work for a company that claimed to offer an AI personal assistant. Her job would have literally been to be the customer's remote-working personal assistant. I like to think that her report to the regulators may have been part of what inspired these grifts to look internationally for their exploitable labor. I don't think I need to get into the more recent examples here, of all forums.

    Obviously yelling at your compiler isn't going to lead to being an asshole to actual people any more than smashing a keyboard or cursing after missing a nail with a hammer does. And to be fair, most of the posters here (other than the drive-bys) aren't exactly lacking in class consciousness or human decency or whatever you want to call it, so I'm probably preaching to the choir. But I do think there's a risk that injecting that ambiguity into the incidental relations we have with other people through our technologies (e.g. the chat window with tech support that could be a bot or a real agent depending on the stage of the conversation) is going to degrade the working conditions for a lot of real people, and the best way to avoid that is to set the norm that it's better to be polite to the robot if it's going to pretend to be a person.

  • Nvidia unveils its flagship RTX 5090 card — with AI-juiced frame rates
  • Gamers, for all their faults, have been pretty consistently okay on generative AI, at least in the cases I've seen. It doesn't hurt that Nvidia keeps stapling features like this onto its hardware, features that supposedly improve performance but at the cost of breaking things and/or requiring more work from devs who are already being run ragged.

    Also, I can almost guarantee that the neural texture stuff they're talking about won't get enough uptake from developers to actually deliver improvements. Let's do a bunch more work to maybe get some memory savings on some of the highest-end hardware!

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • Counterpoint: to what extent are hyperkludges actually a unique thing, versus an aspect of how technologies and tools are integrated into human context? Like, one of the original examples is the TCP/IP stack, but as anyone who has had to wrangle multiple vendors can attest, a lot of the value in that standardization necessarily comes from the network effects - the fact that it's an accepted standard. The web couldn't function if you had a bespoke protocol stack hand-made to elegantly handle the specific problems of a given application, not just because of the difficulty in building that much software (i.e. network effects on the design and construction side) but because of how unwieldy and impractical it would be to get any of those applications in front of people. The fit of those tools for a given application is secondary to how much more cleanly the entire ecosystem can operate because they are more limited in number.

    The OP also talks about how embedded the history of a given problem is in the solution, which feels like the central explanation for this trend. In that sense a hyperkludge isn't so much a unique pattern that some things fall into as a way of indicating a particularly noteworthy whorl in the fractal infinikludge that is all human endeavors.

  • I regret to inform you that AI safety institutes are still on their bull shit
  • I've watched a few of those "I taught an AI to play tag" videos from some time back, and while it's interesting to see what kinds of degenerate strategies the computer finds (trying to find a way out of bounds being a consistent favorite after enough iterations), it's always a case of "wow, I screwed up in designing the environment or rewards" and not "dang, look how smart the computer is!"

    As always with this nonsense, the problem is that the machine is too dumb to be trusted rather than too smart and powerful. Like, identifying patterns that people would miss is arguably the biggest strength of machine learning in general, but that's not the same as those patterns being meaningful or useful.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • This is my biggest gripe with that nonsense. If you make it hard to do something well, you won't end up with an elite cadre of uber-coders, because there aren't enough of those people to do all the programming that people want done. Instead you'll see that much more software engineering done really goddamned badly, and despite appearances at the time, it turns out there is a maximum amount of shitty software the world can endure.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025
  • Surely it's better to specify those defaults in the config file and have the system just fail if the necessary flags aren't present. Having worked in support, I can vouch for the amount of suffering that could be avoided if more systems actually failed when important configuration isn't in place.
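
    A minimal sketch of what I mean, in Python - every name and key here is hypothetical, not taken from any real system:

    ```python
    # Fail-fast config loading: no silent built-in defaults. If a required
    # key is missing from the config file, refuse to start at all.
    import json
    import sys

    REQUIRED_KEYS = {"listen_port", "db_url", "log_level"}  # hypothetical keys

    def load_config(path):
        with open(path) as f:
            config = json.load(f)
        missing = REQUIRED_KEYS - config.keys()
        if missing:
            # Die loudly at startup instead of limping along on defaults
            # nobody remembers choosing.
            sys.exit("config error: missing required keys: " + ", ".join(sorted(missing)))
        return config
    ```

    That way whoever forgot to set db_url finds out at deploy time instead of three weeks later in a support ticket.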

  • Claud 3 is a bich
  • While I think I get OP's point, I'm also reminded of our thread a few months back where I advised being polite to the machines just to build the habit of being respectful in the role of the person making a request.

    If nothing else you can't guarantee that your request won't be deemed tricky enough to deliver to a wildly underpaid person somewhere in the global south.

  • Shock as OpenAI’s Media Manager opt-out tool turns out to be vaporware
  • SomeBODY once told me

    The world was gonna roll me

    I'm only a stochastic parrot

    She was looking kinda dumb

    Drawing those extra thumbs

    And insisting that the L was on your head

    Well, the slop starts coming and it don't stop coming

    Steal all the books so you hit the ground running

    Didn't make sense but I still got funds

    Stole so much art but it still looks dumb

    So much to steal, not much for free

    So what's wrong with my copyright cheat

    You'll never know where your power flowed

    Just wait on my uranium glow!

    Hey now, you're a slop star

    Regulators get played

    Hey now, you're a great mark

    But Sam Altman got paid

    All that matters is growth

    And that journalists all get rolled

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 5th January 2025
  • Actually, wait, I'm pretty sure it's even worse, because I'm terrible at reading logarithmic scales. It's roughly halfway between $1,000 and $10,000 on their log scale, which if I do the math while actually awake works out closer to $3,000.
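
    For the record, the sanity check (assuming the neighboring ticks really are $1,000 and $10,000): halfway between two ticks on a log scale is the geometric mean, not the arithmetic one.

    ```python
    from math import sqrt

    low, high = 1_000, 10_000
    # Halfway on a log scale = geometric mean:
    # 10 ** ((log10(low) + log10(high)) / 2) == sqrt(low * high)
    print(round(sqrt(low * high)))  # 3162, hence "closer to $3,000"
    ```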

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 5th January 2025
  • Nobody outside the company has been able to confirm whether the impressive benchmark performance of OpenAI's o3 model represents a significant leap in actual utility or just a significant gap in the value of those benchmarks. However, they have released information showing that this ostensibly most powerful model costs orders of magnitude more to run. The lede is in that first graph, which shows that for whatever performance gain it offers, o3 costs upwards of ~$10 per request, with the headline-grabbing version costing ~$1,500 per request.

    I hope they've been able to identify a market willing to pay out the ass for performance that, even if it somehow isn't overhyped, is roughly equivalent to an average college graduate.

  • Lightcone needs your funding for LessWrong! because they had to give the FTX money back
  • > You could argue that another moral of Parfit's hitchhiker is that being a purely selfish agent is bad, and humans aren't purely selfish so it's not applicable to the real world anyway, but in Yudkowsky's philosophy - and decision theory academia - you want a general solution to the problem of rational choice where you can take any utility function and win by its lights regardless of which convoluted setup philosophers drop you into.

    I'm impressed that someone writing on LW managed to encapsulate my biggest objection to their entire process this coherently. This is an entire model of thinking that tries to elevate decontextualization and debate-team nonsense into the peak of intellectual discourse. It's a manner of thinking that couldn't have been better designed to hide the assumptions underlying repugnant conclusions if indeed it had been specifically designed for that purpose.

    YourNetworkIsHaunted @awful.systems
    Posts 0
    Comments 537