Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 20 October 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
Today I was looking at buying some stickers to decorate a laptop and such, so I was browsing Redbubble. Looking here and there I found some nice designs and then stumbled upon a really impressive artist portfolio there. Thousands of designs, woah, I thought, it must have been so much work to put that together!
Then it dawned on me. For a while I had completely forgotten that we live in the age of AI slop... blissful ignorance! But then I noticed the common elements in many of the designs... noticed how everything is surrounded by little dots or stars or other design trinkets. Such a typical AI slop thing, because somehow these "AI" generators can't leave any whitespace, they must fill every square millimeter with something. Of course I don't know for sure, and maybe I'm doing an actual artist an injustice with my assumption, but this sure looked like Gen-AI stuff...
Anyway, I scrapped my order for now while I reconsider how to approach this. My brain still associates sites like Redbubble or Etsy with "art things made by actual humans", but I guess that certainty is outdated now.
This sucks so much. I don't want to pay for AI slop based on stolen human-created art - I want to pay the actual artists. But now I can never know... How can trust be restored?
As humanity gets closer to Artificial General Intelligence (AGI)
The first clause of the opening line, and we've already hit a "citation needed".
He goes from there to taking a prediction market seriously, and citing that Aschenbrenner guy who thinks that Minecraft speedruns are evidence that AI will revolutionize "science, technology, and the economy".
You know, ten or fifteen years ago, I would have disagreed with Tegmark about all sorts of things, but I would have granted him default respect for being a scientist.
He's also just dropped a thorough teardown of the tech press for their role in enabling Silicon Valley's worst excesses. I don't have a fitting Kendrick Lamar reference for this, but I do know a good companion piece: Devs and the Culture of Tech, which goes into the systemic flaws in tech culture which enable this shit.
Molly White reports on Kamala Harris's recent remarks about cryptocurrency being a cool opportunity for Black men.
VP Harris's press release (someone remind me to archive this once the Internet Archive is back up). Most of the rest of it is reasonable, but it paints cryptocurrency in a cautiously positive light.
Supporting a regulatory framework for cryptocurrency and other digital assets so Black men who invest in and own these assets are protected
[...]
Enabling Black men who hold digital assets to benefit from financial innovation.
More than 20% of Black Americans own or have owned cryptocurrency assets. Vice President Harris appreciates the ways in which new technologies can broaden access to banking and financial services. She will make sure owners of and investors in digital assets benefit from a regulatory framework so that Black men and others who participate in this market are protected.
Overall there has been a lot of cryptocurrency money in this US election on both sides of the aisle, which Molly White has also reported extensively on. I kind of hate it.
"regulation" here is left (deliberately) vague. Regulation should start with calling out all the scammers, shutting down cryptocurrency ATMs, prohibiting noise pollution, and going from there; but we clearly don't live in a sensible world.
Over on /r/politics, there are several users clamoring for someone to feed the 1,900-page independent counsel report into an LLM, which is an interesting instance of second-order laziness.
They also seem convinced that NotebookLM is incapable of confabulation, which is hilarious and sad. Could it be sneaky advertising?
hello, as a year 12 student who just did the first english exam, i was genuinely baffled seeing one of the stimulus texts u have to analyse is an AI IMAGE. my friend found the image of it online, but that’s what it looked like
for a subject which tells u to “analyse the deeper meaning”, “analyse the composer’s intent”, “appreciate aesthetic and intellectual value” having an AI image in which you physically can’t analyse anything deeper than what it suggests, it’s just extremely ironic 😭 idk, [as an artist who DOESNT use AI]* i might have a different take on this since i’m an artist, what r ur thoughts?
*NB: original post contains the text: "as an artist using AI images" but this was corrected in a later comment:
also i didn’t read over this after typing it out but, meant to say, “as an artist who DOESNT use AI”
Like Vitalik Buterin creating ETH because he was mad his OP WoW character got nerfed, we now have more gamer lore: J.D. Vance played a Yawgmoth's Bargain deck.
Since OpenAI's revenue isn't from advertising, it should be slightly easier for them to resist the call of enshittification this early in the company's history.
Can't enshittify that which is already shit
Twice in the last week I've had Claude refuse to answer questions about a specific racial separatist group (nothing about their ideology, just their name and facts about their membership) and questions about unconventional ways to assess job candidates. Both times I turned to ChatGPT and it gave me an answer immediately
Just a normal hackernews, testing if the models they use are racist
Well, at this point most new data being created is conversations with ChatGPT, seeing as how Stack Overflow and Reddit are increasingly useless, so their conversation logs are their moat.
saw this via a friend earlier, forgot to link. xcancel
socmed administrator for a conf rolls with liarsynth to "expand" a cropped image, and the autoplag machine shits out a more sex-coded image of the speaker
the mindset of "just make some shit to pass muster" obviously shines through in a lot of promptfans and promptfondlers, and while that's fucked up I don't want to get too stuck on that now. one of the things I've been mulling over for a while is pondering what a world (and digital landscape) with a richer capability for enthusiastic consent could look like. and by that I mean, not just more granular (a la apple photo/phonebook acl) than this current y/n bullshit where a platform makes a landgrab for a pile of shit, but something else entirely. "yeah, on my gamer profile you can make shitposts, but on academic stuff please keep it formal" expressed and traceable
even if just as a thought experiment (because of course there are lots of funky practical problems, combined with the "humans just don't really exist that way" effort-tax overhead this may require), it might point to some useful avenues for handling this extremely overt bullshit, and for informing/shaping impending norms
(e: apologies for semi stream of thought, it's late and i'm tired)
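Purely as a toy illustration of that "more granular than y/n" consent idea (every name here is hypothetical — this is not any real platform's API, just a default-deny sketch of per-context grants):

```python
# Toy model: consent is granted per context and per use,
# and anything not explicitly granted is denied.
from dataclasses import dataclass, field


@dataclass
class ConsentGrant:
    context: str        # e.g. "gamer-profile", "academic"
    allowed_uses: set   # e.g. {"shitposts"}, {"formal-reuse"}


@dataclass
class ConsentProfile:
    grants: list = field(default_factory=list)

    def permits(self, context: str, use: str) -> bool:
        # Default-deny: a use is allowed only if some grant
        # explicitly covers this context and this use.
        return any(
            g.context == context and use in g.allowed_uses
            for g in self.grants
        )


# "on my gamer profile you can make shitposts,
#  but on academic stuff please keep it formal"
profile = ConsentProfile(grants=[
    ConsentGrant("gamer-profile", {"shitposts"}),
    ConsentGrant("academic", {"formal-reuse"}),
])

print(profile.permits("gamer-profile", "shitposts"))  # True
print(profile.permits("academic", "shitposts"))       # False
```

The interesting (and hard) part isn't the data structure, of course — it's the "expressed and traceable" bit: getting platforms to carry these grants along with the content instead of making a landgrab.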
it just clicked for me, but idk if it makes sense: OpenAI's nonprofit status could be used later (inevitably, in court) to make the research clause of fair use work. They had it when training their models, and that might have been a factor in why they retained it, on top of trying to attract actually skilled people and not just hypemen and money.
New alignment offer: I guess some people were sad they missed the last window. Some have been leaking to the press and ex-employees. That's water under the bridge. Maybe the last offer needed to be higher. People have said they want a new window, so this is my attempt. Here's a new one: You have until 00:00 UTC Oct 17 (-4 hours) to DM me the words, ‘I resign and would like to take the 9-month buy-out offer’ You don't have to say any reason, or anything else. I will reply ‘Thank you.’ Automattic will accept your resignation, you can keep you [sic] office stuff and work laptop; you will lose access to Automattic and Wong (no slack, user accounts, etc). HR will be in touch to wrap up details in the coming days, including your 9 months of compensation, they have a lot on their plates right now. You have my word this deal will be honored. We will try to keep this quiet, so it won't be used against us, but I still wanted to give Automatticians another window.
there’s a (mid) joke here about how a boy who’s obsessed with photography really should understand more about optics
and who wouldn’t trust their livelihood in a difficult job market to a promise from a very stable genius like matt, who will destroy you financially if he thinks you talked to the press:
After an exodus of employees at Automattic who disagreed with CEO Matt Mullenweg’s recently divisive legal battle with WP Engine, he’s upped the ante with another buyout offer—and a threat that employees speaking to the press should “exit gracefully, or be fired tomorrow with no severance.”
The full piece is worth a read, but the conclusion's pretty damn good, so I'm copy-pasting it here:
All of this financial and technological speculation has, however, created something a bit more solid: self-imposed deadlines. In 2026, 2030, or a few thousand days, it will be time to check in with all the AI messiahs. Generative AI—boom or bubble—finally has an expiration date.
i am hearing that ProQuest has been quietly contacting small publishers to see if it can ingest their published output for AI training.
ProQuest has an AI thing now, but it's denied it's training on hosted content ... yet.
if you are, or know, an author who's had a letter of this sort recently, mentioning ProQuest or no, i'd love to know and please tell your friends - email is dgerard@gmail.com
New piece from The Atlantic: The Age of AI Child Abuse Is Here, which delves into a major hack of Muah.AI and the large-scale problem of people using AI as a child porn generator.
And now, another personal sidenote, because I cannot stop writing these (this one's thankfully unrelated to the article's main point):
The idea that "[Insert New Tech] Is Inevitable™" (which Unserious Academic interrogated in depth, BTW) took a major blow when NFTs crashed and burned in full view of the public and rapidly became a pop-culture punchline.
That, I suspect, is helping to fuel the large-scale rejection of AI and resistance to its implementation: Silicon Valley's failure to make NFTs a thing has taught people that Silicon Valley can be beaten, that resistance is anything but futile.
As Nina Power was mentioned before, here is an article on a Welsh 'druid'/forger which touches on that subject (and Marx) a bit. People might find it an interesting read.
Anti-Woke Druids and Radical Bards - 'What links Welsh 18th century romantic Druid-Bards, gathering around a circle of pebbles in North London, and the contemporary online right?'