Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 6 October 2025
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
I’m going to start replying to everything like I’m on Hacker News. Unhappy with Congress? Why don’t you just start a new country and write a constitution and secede? It’s not that hard once you know how. Actually, I wrote a microstate in a weekend using Rust.
App developers think that’s a bogus argument. Mr. Bier told me that data he had seen from start-ups he advised suggested that contact sharing had dropped significantly since the iOS 18 changes went into effect, and that for some apps, the number of users sharing 10 or fewer contacts had increased as much as 25 percent.
aww, does the widdle app's business model collapse completely once it can't harvest data? how sad
this reinforces a suspicion that I've had for a while: the only reason most people put up with any of this shit is that it's an all-or-nothing choice and they don't know the full impact (because it's intentionally obscured). the moment you give them an overt choice that makes them think about it, it turns out most are actually not fine with the state of affairs
Example: the article “Leninist historiography” was entirely written by AI and previously included a list of completely fake sources in Russian and Hungarian at the bottom of the page.
As previously mentioned, the "Behind the Bastards" podcast is tackling Curtis Yarvin. I'm just past the first ad intermission (why are all podcast ads just ads for other podcasts? It's like podcast incest), and according to the host, Yarvin models his ideal society on Usenet pre-Eternal September.
This is something I've noticed too (I got on the internet just before Eternal September). There's a nostalgia for the "old" internet, which was supposed to be purer and less ad-infested than the current fallen age. Usenet is often mentioned. And I've always thought that's dumb, because the old internet was really, really exclusionary. You had to be someone in academia or the internet business, so you were Anglophone, white, and male. The dream of the old pure internet is a dream of an internet without women or people of color, people who might be more expressive in media other than 7-bit ASCII.
This was a reminder that the nostalgia can be coded fascist, too.
My current hyperfixation is Ecosia, maker of “the greenest search engine” (already problematic), implementing a wrapped-ChatGPT chatbot and saying it has a “green mode”, which is not some kind of solar-powered, ethically sound generative AI, but rather an instructive prompt to only give answers relating to sustainable business models etc etc.
So I'm guessing what happened here is that the statistically average terminal session doesn't end after opening an SSH connection, and the LLM doesn't actually understand what it's doing or when to stop, especially when it's being prompted with the output of whatever it last commanded.
Shlegeris said he uses his AI agent all the time for basic system administration tasks that he doesn't remember how to do on his own, such as installing certain bits of software and configuring security settings.
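If that guess is right, the failure is easy to picture in code. Here's a minimal hypothetical sketch of that kind of agent loop — nothing from the actual tool, and `askModel` is a stand-in for whatever chat-completion call it really makes — just to show where the "doesn't know when to stop" problem lives:

```typescript
// Hypothetical sketch of the suspected failure mode, NOT the actual agent:
// the model's reply is executed as a shell command, and the command's
// output becomes the next prompt, so the model never sees a "done" signal.
import { execSync } from "node:child_process";

// Stand-in for a real chat-completion call; a real implementation would
// send `prompt` to an LLM and return the shell command it suggests.
async function askModel(prompt: string): Promise<string> {
  return "echo 'ssh connection is open, now what?'"; // hardcoded for illustration
}

async function agentLoop(task: string): Promise<void> {
  let context = task;
  for (let step = 0; step < 20; step++) {
    const command = await askModel(context);
    const output = execSync(command, { encoding: "utf8" }); // runs whatever the model said
    // The next prompt is just the last command's output. Nothing in this
    // loop models "the task is complete" — which is the suspected bug.
    context = `You ran: ${command}\nOutput:\n${output}\nWhat next?`;
  }
}

void agentLoop("Open an SSH connection to the build box.");
```

The statistically average terminal session continues after `ssh` succeeds, so a model trained on those sessions keeps emitting commands.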
Ex-headliners Evergreen Terrace: "Even after they offered to pull Kyle from the event, we discovered several associated entities that we simply do not agree with"
the new headliner will be uh a Slipknot covers band
organisers: "We have been silent. But we are prepping. The liberal mob attempted to destroy Shell Shock. But we will not allow it. This is now about more than a concert. This is a war of ideology." yeah you have a great show guys
(This would've been more shocking to me in 2023, but after over a year in this bubble I have stopped expecting anything resembling basic human decency from those who work in AI)
"if you can't beat 'em, join 'em" but the wrong way around. I guess they got tired of begging google for money?
And, for the foreseeable future at least, advertising is a key commercial engine of the internet
this tracks with something I've been saying for a while as well, but with some differences. one of the most notable is the misrepresentation here of "the internet", when what's really meant is "all the entities playing the online advertising game to extract from everyone else"
Also, a quick sidenote, which spawned from seeing this:
This is pure gut feeling, but I suspect that "AI training" has become synonymous with "art theft/copyright infringement" in the public consciousness.
Between AI bros publicly scraping against people's wishes (Exhibit A, Exhibit B, Exhibit C), the large-scale theft of data which went to produce these LLMs' datasets, and the general perception that working in AI means you support theft (Exhibit A, Exhibit B), I wouldn't blame Joe Public for treating AI as inherently infringing.
Small FYI, not a sneer or anything; you can stop reading if you don't know what the Godot engine is. But if you do and you hear of the fork, you can just ignore it. (The people involved also seem rather iffy: one guy who went crazy after somebody mentioned they'd like gay relationships in his game, and some MAGA conspiracy-theory-style coder. That's going by the 3 normal people the account follows (out of 5), who I assume are behind it.)
So, today MS publishes this blog post about something with AI. It starts with "We’re living through a technological paradigm shift."... and right there I didn't bother reading the rest of it because I don't want to expose my brain to it further.
This exchange on HN, from the Wordpress meltdown, is going to make an amazing exhibit in the upcoming trial:
Anonymous: Matt, I mean this sincerely: get yourself checked out. Do you have a carbon monoxide detector in your house? … Go to a 10 day silent retreat, or buy a ranch in Montana and host your own Burning Man…
Matt Mullenweg: Thanks, I carry a CO2 and carbon monoxide monitor. … I do own a place in Montana, and I meditate several times a day.
so, I've always thought that Blind's "we'll verify your presence by sending you shit on your corp mail" (which, y'know, mail logs etc....) is kind of a fucking awful idea. but!
I didn't realize I was still signed up to emails from NanoWrimo (I tried to do the challenge a few years ago) and received this "we're sorry" email from them today. I can't really bring myself to read and sneer at the whole thing, but I'm pasting the full text below because I'm not sure if this is public anywhere else.
National Novel Writing Month
To Our NaNoWriMo Community:
There is no way to begin this letter other than to apologize for the harm and confusion we caused last month with our comments about Artificial Intelligence (AI). We failed to contextualize our reasons for making this statement, we chose poor wording to explain some of our thinking, and we failed to acknowledge the harm done to some writers by bad actors in the generative AI space. Our goal at the time was not to broadcast a comprehensive statement that reflected our full sentiments about AI, and we didn’t anticipate that our post would be treated as such. Earlier posts about AI in our FAQs from more than a year ago spoke similarly to our neutrality and garnered little attention.
We don’t want to use this space to repeat the content of the full apology we posted in the wake of our original statements. But we do want to raise why this position is critical to the spirit—and to the future—of NaNoWriMo.
Supporting and uplifting writers is at the heart of what we do. Our stated mission is “to provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds—on and off the page”. Our comments last month were prompted by intense harassment and bullying we were seeing on our social media channels, which specifically involved AI. When our spaces become overwhelmed with issues that don’t relate to our core offering, and that are venomous in tone, our ability to cheer on writers is seriously derailed.
One priority this year has been a return to our mission, and deep thinking about what is in-scope for an organization of our size. A year ago, we were attempting to do too much, and we were doing some of it poorly. Though we admire the many writers’ advocacy groups that function as guilds and that take on industry issues, that isn’t part of our mission. Reshaping our core programs in ways that are safe for all community members, that are operationally sound, that are legally compliant, and that are mission-aligned, is our focus.
So, what have we done this year to draw boundaries around our scope, promote community safety, and return to our core purpose?
We ended our practice of hosting unrestricted, all-ages spaces on NaNoWriMo.org and made major website changes. Such safety measures to protect young Wrimos were long overdue.
We stopped the practice of allowing anyone to self-identify as an educator on our YWP website and contracted an outside vendor to certify educators. We placed controls on social features for young writers and we’re on the brink of relaunch.
We redesigned our volunteer program and brought it into legal compliance. Previously, none of our ~800 global volunteers had undergone identity verification, background checks, or training that meets nonprofit standards and that complies with California law. We are gradually reinstating volunteers.
We admitted there are spaces that we can’t moderate. We ended our policy of endorsing Discord servers and local Facebook groups that our staff had no purview over. We paused the NaNoWriMo forums pending serious overhaul. We redesigned our training to better-prepare returning moderators to support our community standards.
We revised our Codes of Conduct to clarify our guidelines and to improve our culture. This was in direct response to a November 2023 board investigation of moderation complaints.
We proactively made staffing changes. We took seriously last year’s allegations of child endangerment and other complaints and inspected the conditions that allowed such breaches to occur. No employee who played a role in the staff misconduct the Board investigated remains with the organization.
Beyond this, we’re planning more broadly for NaNoWriMo’s future. Since 2022, the Board has been in conversation about our 25th Anniversary (which we kick off this year) and what that should mean. The joy, magic, and community that NaNoWriMo has created over the years is nothing short of miraculous. And yet, we are not delivering the website experience and tools that most writers need and expect; we’ve had much work to do around safety and compliance; and the organization has operated at a budget deficit for four of the past six years.
What we want you to know is that we’re fighting hard for the organization, and that providing a safer environment, with a better user interface, that delivers on our mission and lives up to our values is our goal. We also want you to know that we are a small, imperfect team that is doing our best to communicate well and proactively. Since last November, we’ve issued twelve official communications and created 40+ FAQs. A visit to that page will underscore that we don’t harvest your data, that no member of our Board of Directors said we did, and that there are plenty of ways to participate, even if your region is still without an ML.
With all that said, we’re one month away! Thousands of Wrimos have already officially registered and you can, too! Our team is heads-down, updating resources for this year’s challenge and getting a lot of exciting programming staged and ready. If you’re writing this season, we’re here for you and are dedicated, as ever, to helping you meet your creative goals!
what's the over/under on the Spruce Pine thing causing promptfondlers and their ilk to suddenly not be able to get chips, and then hit a(n even more concrete) ceiling?
(I know there may be some of the stuff in stockpiles awaiting fabrication, but still, that can't be enough to withstand that shock)
Two Harvard students recently revealed that it's possible to combine Meta smart glasses with face image search technology to "reveal anyone's personal details," including their name, address, and phone number, "just from looking at them."
In a Google document, AnhPhu Nguyen and Caine Ardayfio explained how they linked a pair of Meta Ray Bans 2 to an invasive face search engine called PimEyes to help identify strangers by cross-searching their information on various people-search databases. They then used a large language model (LLM) to rapidly combine all that data, making it possible to dox someone in a glance or surface information to scam someone in seconds—or other nefarious uses, such as "some dude could just find some girl’s home address on the train and just follow them home,” Nguyen told 404 Media.
This is all possible thanks to recent progress with LLMs, the students said.
Putting my off-the-cuff thoughts on this:
Right off the bat, I'm pretty confident AR/smart glasses will end up dead on arrival - I'm no expert in marketing/PR, but I'm pretty sure "our product helped someone dox innocent people" is the kind of Dasani-level disaster which pretty much guarantees your product will crash and burn.
I suspect we're gonna see video of someone getting punched for wearing smart glasses - this story's given the public a first impression of smart glasses that boils down to "this person's a creep", and it's a lot easier to physically assault someone wearing smart glasses than some random LLM.
Maybe not the right place, not really a sneer but anyways. The Smile (aka Yorke & Greenwood from Radiohead) made a music video with StableDiffusion and I’m pretty bummed out. 😔
I might be wrong but this sounds like a quick way to make the web worse by putting a huge computational load on your machine for the purpose of privacy inside customer service chat bots that nobody wants. Please correct me if I’m wrong
WebLLM is a high-performance in-browser LLM inference engine that brings language model inference directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU.
WebLLM is fully compatible with OpenAI API. That is, you can use the same OpenAI API on any open source models locally, with functionalities including streaming, JSON-mode, function-calling (WIP), etc.
We can bring a lot of fun opportunities to build AI assistants for everyone and enable privacy while enjoying GPU acceleration.
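For what it's worth, the OpenAI-compatible surface the project describes looks roughly like this in practice. This is a sketch from memory of the project's examples, so treat `CreateMLCEngine`, the callback option, and the model ID as assumptions and check the current README:

```typescript
import * as webllm from "@mlc-ai/web-llm";

// Downloads the model weights into the browser cache and compiles WebGPU
// kernels; everything after this runs client-side, with no server calls.
const engine = await webllm.CreateMLCEngine(
  "Llama-3.1-8B-Instruct-q4f32_1-MLC", // illustrative model ID
  { initProgressCallback: (p) => console.log(p.text) },
);

// Same request shape as OpenAI's chat completions API, but the inference
// happens locally on the visitor's GPU.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Why is my laptop fan spinning up?" }],
});
console.log(reply.choices[0]?.message.content);
```

Which also makes the computational-load worry above concrete: the weights alone run to gigabytes of download, and every token is computed on the visitor's own GPU.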
Hopefully this doesn’t break the rules. But where can I find some educational podcasts that aren’t overly capitalist, reactionary, rationalist, or otherwise right-leaning or authoritarian in nature?
I want to specifically avoid content like Lex Fridman, Huberman, Joe Rogan, and Sam Harris: the kind that sounds good on the surface but goes down a rabbit hole of affirming reactionary bias.
I’m not amazing with words, so I hope what I’m saying makes sense. Thanks.
I'm in the other camp: I remember when we thought an AI capable of solving Go was astronomically impossible and yet here we are. This article reads just like the skeptic essays back then.
Ah yes my coworkers communicate exclusively in Go games and they are always winning because they are AI and I am on the street, poor.
There's not that much else to sneer at though, plenty of reasonable people.
I got this AMAZING OPPORTUNITY in my inbox, because once your email appears on a single published paper you're forever doomed to garbage like this (transcript at the end):
Highlights:
Addresses me as Dr. I'm not a doctor. I checked, and apparently Dr. Muhammad Imran Qureshi indeed has a PhD and is a lecturer at Teesside University International Business School (link to profile). His recent papers include a bunch of blockchain bullshit. Teesside University appears to be a legit UK university, although I'm not sure how legit the Business School is (or how legit any Business School can be, really).
Tells us their research is so shit that using wisdom woodchippers actually increases their accuracy.
One of the features is "publication support", so this might be one of those scams where you pay an exorbitant fee to get "published" in some sketchy non-peer-reviewed journal.
One of the covered AI tools is Microsoft Excel. If you were wondering if "AI" had any meaning.
Also, by god, there are so many different ChatGPT clones now. I haven't heard of most of those names. I kinda hope they're as AI as Excel is.
I'm not sure which would be worse, this being a scam, or them legit thinking this brings value to the world and believing they're helping anyone.
transcript
Email titled Revolutionize Your Research: AI-Powered Systematic Literature Review Master Class
Online course on writing
AI-Powered Systematic Literature Review
Register Now:
Dear Dr. [REDACTED],
we're reaching out because we believe our AI-Powered Systematic Review Masterclass could be a game-changer for your research. As someone who's passionate about research writing, we know the challenges of conducting thorough and efficient systematic reviews.
Key takeaways:
AI-powered prompt engineering for targeted literature searches
Crafting optimal research questions for AI analysis
Intelligent data curation to streamline your workflow
Leveraging AI for literature synthesis and theory development
Join our Batch 4 and discover how AI can help you:
Save time by automating repetitive tasks
Improve accuracy with AI-driven analysis
Gain a competitive edge with innovative research methods
Enrollment is now open! Don't miss this opportunity to take your systematic review skills to the next level.
Key Course Details:
Course Title: AI-Powered Systematic Literature Reviews Master Class
Live interaction + recording = Learning that fits your life
Dates: October 13, 2024, to November 3, 2024
Live Session Schedule: Every Sunday at 2 PM UK time (session recordings will be accessible).
Duration: Four Weeks
Platform: Zoom
Course Fee: GBP 100
Certification: Yes
Trainer: Dr. Muhammad Imran Qureshi
Key features
Asynchronous learning
Video tutorials
Live sessions with access to recordings
Research paper Templates
Premade Prompts for Systematic Literature Review
Exercise Files
Publication support
The teaching methodology will offer a dynamic learning experience, featuring live sessions every Saturday via Zoom for a duration of four weeks. These sessions will provide an interactive platform for engaging discussions, personalised feedback, and the opportunity to connect with both the course instructor and fellow participants.
Moreover, our diverse instructional approach encompasses video tutorials, interactive engagements, and comprehensive feedback loops, ensuring a well-rounded and immersive learning experience.
Certification
Upon successful completion of the course, participants will receive certification from the Association of Professional Researchers and Academicians UK, validating their mastery of AI-enabled methodologies for conducting comprehensive and insightful literature reviews.
been feeling this for a while too and wondering how to put it into words. especially in light of all the techfash, pressing climate and general market problems, etc
one of the things I've been holding onto (hoping in?) is my estimation/belief that I don't think the current state of all the deeply-fucked systems is inherently stable, or viable. as I've said here before, that very instability is part of why so many of them are engaged in trying to set things up to protect those self-same systems, as they know the swingback is coming and they want to make it as hard as possible to claw things back from them
but how long until it breaks, and with how much splash damage, are things I haven't really been able to estimate
Folks, I need some expert advice. Thanks in advance!
Our NSF grant reviews came in (on Saturday), and two of the four reviews (an Excellent AND a Fair, lol) have confabulations and [insert text here brackets like this] that indicate they were LLM-generated by lazy people. Just absolutely gutted. It's like an alien reviewed a version of our grant application from a parallel dimension.
Who do I need to contact to get eyes on the situation, other than the program director? We get to simmer all day today since it was released on the weekend, so at least I have an excuse to slow down and be thoughtful.
A redditor has a pinned post on /r/technology. They claim to be at a conference with Very Important Promptfondlers in Berlin. The OP feels like low-effort guerilla marketing, tbh; the US will dominate the EU due to an overwhelming superiority in AI, long live the new flesh, Emmanuel Macron is on board so this is SUPER SERIOUS, etc.
PS: the original poster, /u/WillSen, self-identifies as CEO of a bootcamp/school called "codesmith," and has lots of ideas about how to retrain people to survive in the longed-for post-AI hellscape. So yeah, it's an ad.
The central problem of 21st century democracy will be finding a way to inoculate humanity against confident bullshitters. That and nature trying to kill us. Oh, and capitalism in general, but I repeat myself.
So the ongoing discourse about AI energy requirements and their impact on the world reminded me about the situation in Texas. It set me thinking about what happens when the bubble pops. In the telecom bubble of the 90s or the British rail bubble of the 1840s, there was a lot of actual physical infrastructure created that outlived the unprofitable and unsustainable companies that had built it. After the bubble, this surplus infrastructure helped make the associated goods and services cheaper and more accessible as the market corrected. Investors (and there were a lot of investors) lost their shirts, but ultimately there was some actual value created once we were out of the bezzle.
Obviously the crypto bubble will have no such benefits. It's not like energy demand was particularly constrained outside of crypto, so any surplus electrical infrastructure will probably be shut back down (and good riddance to dirty energy). The mining hardware itself is all purpose-built ASICs that can't actually do anything apart from mining, so it's basically turning directly into scrap as far as I can tell.
But the high-performance GPUs that these AI operations rely on are more general-purpose even if they're optimized for AI workloads. The bubble is still active enough that there doesn't appear to be much talk about it, but what kind of use might we see some of these chips and datacenters put to as the bubble burns down?