
Stubsack: weekly thread for sneers not worth an entire post, week ending 11th May 2025

awful.systems/post/4167045

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

140 comments
  • Here’s a fun one… Microsoft added Copilot features to SharePoint. The Copilot system has its own set of access controls, which let it see things that normal users can’t. Normal users can then just ask Copilot to tell them the contents of the files and pages they can’t see themselves. Luckily, no business would ever put sensitive information in their SharePoint system, so this isn’t a realistic threat, haha.

    Obviously Microsoft have significant resources to research and fix the security problems that LLM integration will bring with it. So much money. So many experts. Plenty of time to think about the issues since the first Recall debacle.

    And this is what they’ve accomplished.

    https://www.pentestpartners.com/security-blog/exploiting-copilot-ai-for-sharepoint/
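
    (Editorializing a bit: as described, this has the shape of the classic confused-deputy bug, where an agent holding its own elevated credentials acts on behalf of a less-privileged user without re-checking that user’s permissions. A minimal sketch of the pattern in Python, with all names and data hypothetical and no claim this is Microsoft’s actual code:

    ```python
    # Toy document store with per-document ACLs. Everything here is made up.
    DOCS = {
        "payroll.xlsx": {"acl": {"hr_admin"}, "body": "everyone's salaries"},
        "lunch_menu.txt": {"acl": {"everyone"}, "body": "tacos on friday"},
    }

    def read_doc(name: str, groups: set[str]) -> str:
        """Return a document body if any of `groups` appears in its ACL."""
        doc = DOCS[name]
        if doc["acl"] & groups:
            return doc["body"]
        raise PermissionError(name)

    # The assistant runs under its own service account with broad read access.
    ASSISTANT_GROUPS = {"hr_admin", "everyone"}

    def assistant_answer(question: str, user_groups: set[str]) -> str:
        # BUG: documents are fetched with the assistant's groups, not the
        # requesting user's. Passing `user_groups` through is the fix.
        for name in DOCS:
            if name in question:
                return read_doc(name, ASSISTANT_GROUPS)
        return "I couldn't find that document."

    # Directly, a user with no HR access is refused:
    #   read_doc("payroll.xlsx", {"everyone"})  ->  PermissionError
    # But the assistant happily launders the same request for them:
    print(assistant_answer("what's in payroll.xlsx?", {"everyone"}))
    # prints: everyone's salaries
    ```

    The boring, well-known fix is to impersonate the requesting user, or filter results against their ACLs, before the model ever sees the data.)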

    • @rook @BlueMonday1984 wow. Why go to all the trouble of social engineering a company when you can just ask Copilot?

    • @rook @BlueMonday1984 Maybe they have asked CoPilot to write the code that restricts access for CoPilot?

      (Sometimes this future feels like 2001: A Space Odyssey, just as a farce. And without benevolent aliens.)

    • @rook @BlueMonday1984

      Thankfully I'm able to say "what is sharepoint?"

      I did meet it once. A client used it in their office. But when they wanted us offshore (via satellite link) to contribute to it, it became awfully unstable, probably because of latency/unstable data links.

      It's M$. I doubt it has improved.

  • Amazon publishes its Generative AI Adoption Index and the results are something! And by "something" I mean "annoying".

    I don't know how seriously I should take the numbers, because it's Amazon after all and they want to make money with this crap, but on the other hand they surveyed "senior IT decision-makers", and my opinion on that crowd isn't the highest either.

    Highlights:

    • Prioritizing spending on GenAI over spending on security. Yes, that is not going to cause problems at all. I do not see how this could go wrong.
    • The junk chart about "job roles with generative AI skills as a requirement". What the fuck does that even mean, what is the skill? Do job interviews now include a section where you have to demonstrate promptfondling "skills"? (Also, the scale of the horizontal axis is wrong, but maybe no one noticed because they were so dazzled by the bars being suitcases for some reason.)
    • Cherry on top: one box to the left they list "limited understanding of generative AI skilling needs" as a barrier for "generative AI training". So yeah...
    • "CAIO". I hate that I just learned that.
  • Here's an interesting nugget I discovered today

    A long LW post tries to tie AI safety and regulations together. I didn't bother reading it all, but this passage caught my eye:

    USS Eastland Disaster. After maritime regulations required more lifeboats following the Titanic disaster, ships became top-heavy, causing the USS Eastland to capsize and kill 844 people in 1915. This is an example of how well-intentioned regulations can create unforeseen risks if technological systems aren't considered holistically.

    https://www.lesswrong.com/posts/ARhanRcYurAQMmHbg/the-historical-parallels-preliminary-reflection

    You will be shocked to learn that this summary is a bit lacking in detail. According to Wikipedia (https://en.wikipedia.org/wiki/SS_Eastland):

    Because the ship did not meet a targeted speed of 22 miles per hour (35 km/h; 19 kn) during her inaugural season and had a draft too deep for the Black River in South Haven, Michigan, where she was being loaded, the ship returned in September 1903 to Port Huron for modifications, [...] and repositioning of the ship's machinery to reduce the draft of the hull. Even though the modifications increased the ship's speed, the reduced hull draft and extra weight mounted up high reduced the metacentric height and inherent stability as originally designed.

    (my emphasis)
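
    (An aside that's mine, not Wikipedia's: the quantity doing the work in that quote is the metacentric height, GM. In the standard naval-architecture notation,

    ```latex
    \[
      \overline{GM} = \overline{KB} + \overline{BM} - \overline{KG},
      \qquad
      \overline{BM} = \frac{I_{\mathrm{wp}}}{\nabla},
      \qquad
      M_{\text{righting}} = \Delta \, \overline{GM} \, \sin\varphi
    \]
    % KB: height of the center of buoyancy above the keel
    % BM: metacentric radius (waterplane moment of inertia over displaced volume)
    % KG: height of the center of gravity above the keel
    % M:  righting moment at small heel angle phi, for displacement Delta
    ```

    Mounting weight high, whether machinery or lifeboats, raises KG, and a shallower draft lowers KB; both shrink GM and with it the moment available to pull the ship back upright, which is exactly what the quoted passage describes.)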

    The vessel experienced multiple listing incidents between 1903 and 1914.

    Adding lifeboats:

    The federal Seamen's Act had been passed in 1915 following the RMS Titanic disaster three years earlier. The law required retrofitting of a complete set of lifeboats on Eastland, as on many other passenger vessels.[10] This additional weight may have made Eastland more dangerous by making her even more top-heavy. [...] Eastland's owners could choose to either maintain a reduced capacity or add lifeboats to increase capacity, and they elected to add lifeboats to qualify for a license to increase the ship's capacity to 2,570 passengers.

    So. Owners who knew they had a stability problem elected to put profits over safety. But yeah, it's the fault of regulators.

  • More big “we had to fund, enable, and sanewash fascism because the leftists wanted trans people to be alive” energy from the EA crowd. We really overplayed our hand with the extremist positions of Kamala fuckin’ Harris, fellas; they had no choice but to vote for the nazis.

    (reposted, since the original landed in that awkward window on Sunday before the new weekly thread)
