
Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 30 June 2024

Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh facts of Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

203 comments
  • thinking about how I was inoculated against part of ai hype bc a big part of my social circle in undergrad consisted of natural language processing people. they wanted to work at places with names like "OpenAI" and "google deepmind," their program was more or less a cognitive science program, but I never once heard any of them express even the slightest suspicion that LLMs of all things were progressing toward intelligence. it would have been a non sequitur.

    also from their pov the statistical approach to machine learning was defined by abandoning the attempt to externalize the meaning of text. the cliche they used to refer to this was "the meaning of a word is the context in which it occurs."
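
    (that cliché is the distributional hypothesis, and the whole statistical approach falls out of it. a toy sketch of what "the meaning is the context" cashes out to — the corpus and window size here are made up for illustration, nothing more:)

```python
import math
from collections import Counter

# toy corpus and window size are made up for illustration
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

def context_vector(word, window=2):
    """Count every word occurring within `window` positions of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            counts.update(corpus[lo:i] + corpus[i + 1:hi])
    return counts

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

# "cat" and "dog" occur in similar contexts, so their vectors are close;
# "cat" and "on" occur in different contexts, so theirs are not
print(cosine(context_vector("cat"), context_vector("dog")))
print(cosine(context_vector("cat"), context_vector("on")))
```

    no claims about intelligence required: it's counting, all the way down.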

    finding out that some prestigious ai researchers are all about being pilled on immanentizing agi was such a swerve for me. it's like if you were to find out that michio kaku has just won his fourth consecutive nobel prize in physics

    • it’s like if you were to find out that michio kaku has just won his fourth consecutive nobel prize in physics

      hell of a stinger

  • No, all you lawyers explaining to me how the practice of law works in the U.S., you would totally benefit from GPT. Complete with bonus:

    • Everyone explaining to me that lawyers actually read all the documents in discovery is really trying to explain to me, a computer scientist with 20 years of experience[1], how GPT works!
    • [1] Does OP have actual tech expertise? The answer may (not) surprise you!
    • You lawyers admit that sometimes you use google translate and database search engines, and those use machine learning components, and all ML is basically LLMs, so I'm right, Q.E.D.!
    • Lawyers couldn't possibly read everything in discovery, right?
    • Lawyers couldn't possibly pay for professional translation for everything, right?
    • Even when it's mandated by the court?
    • Really?
    • and many, many more
    • This is also a very qucik hypthetical that I wrote up just to show a point not to argue a fucking legal case.

      "Guys I totally didn't expect the lawyers to respond like lawyers when reading my Chat-GPT generated garbage"

      Except... I admitted I was not a lawyer and not an expert, and rather than working to communicate they kept latching onto errors related to law, while they confidently made statements about the nature and functionality of ML technologies like LLMs and NMTs.

      "Why are all the lawyers being so mean to me?? I'm just saying they could all be replaced by chatbots"

    • I'm sorry, but you're wrong. You're also mansplaining to an expert. While I admit that I am not an expert on law and am listening when corrections related to LAW and the practice of LAW are concerned, you do not want to admit your lack of understanding of this technology.

      My god DANIEL, no, people are not mansplaining to you, unless that's a mask for a Danielle.

      EDIT: Down the thread he responds to Kathryn Tewson, an actual expert, with

      Yeah I'm not obligated to answer every question by a horde of people. You should change your name to Karen, because you sure act like a fucking entitled white bitch.

      This guy has such a punchable face, even though I've never seen him. I can just tell.

    • Ah shit, ML spelled backwards and wrong is LLM, they got me good.

    • I just want to latch onto one more thing there:

      GPTs are great at transforming information. Transformations include compression, decompression, and inter-language translation, among others.

      Okay mister computer scientist, sure, what is "transforming information"? From what you're saying it appears like you're describing basically any map from information to other information. But AMONG OTHERS that includes a map going from NO INFO to CONFIDENT BULLSHIT. And I do agree LLMs are amazing at it.

      Here, I'll sketch it out for you in fucking LaTeX:

      $\emptyset \mapsto \mathit{Nonsense}$

    • This thread is an unending source of amusement.

      Someone there found his ORCID and... It's not great: https://orcid.org/0000-0003-2835-3521

      He has basically nothing published, but has like 40 different "preprints", read: PDFs he uploaded to arXiv with no peer review.

      I use these tools daily. I have also built software which utilizes genAI. I have also worked on fine-tuning GPTs. I have written extensive [sic!] on the topic. I also have formal training in mathematics, computer science, engineering, and anthropology[1]. [emph. mine]

      🤡


      [1] No he doesn't? His Education lists A.S. in Engineering, A.S. in Computer Science, and B.S. in Mathematics; the anthropology claim seems completely made up.

      • Those "papers" are gold, they're mostly a few pages long, and they span such a wild range of topics as:

        • How to calculate a mean of numbers?
        • What is a number?
        • Atheism is actually a religion.
        • Ethereum is a store of energy.

        I've never seen someone in such dire need of a wedgie. come on man, you're spending the hours of your life writing your farts into LaTeX and generating DOIs for them; this isn't healthy.

      • he's the very model of a modern crypto-douchebag.

      • A.S. in Engineering, A.S. in Computer Science

        wait, wait. that's basically the bootcamp level of education, no?

      • he does seem to have a carefully cultivated case of engineer's disease

        also it's not "almost all" preprints, it's all preprints; just one repository calls them "papers"

        he hasn't discovered predatory journals yet; it all could have been published for a small fee, like with this one (contains mind-numbing levels of conspiracy-theoretic rambling and related brainworms) hxxps://uraniumisagenocidegiant[.]com/ (you have been warned)

    • studied maths and CS for 20 years, and all he has to show for it on his ORCID is a BSc and a bunch of lousy preprints (with blockchain!). jfc. in that amount of time, people can finish entire PhD degrees, starting from high school, twice over

      on top of that pile of shite sits a preprint titled "A Scholar's Year in Review: Navigating the Convergence of AI, Economics, and Physics in 2023". might be a bit grandiose and bordering on word salad. why the fuck does he think he needs to release preprints. if he had anything worthwhile to say, it would pass peer review

      • he also has researchgate

        Looking to network with other researchers in a diverse array of fields for collaboration and discussion.

        bet you would want to, you time-waster

        and his crowning achievement seems to be a dead startup where he tried to put covid on a blockchain; it has a coin and everything. it's even associated with worldcoin, and so with sam altman. curious, that

    • I wish I was surprised at what the main account was posting about when I looked into it 😵‍💫

  • The Death of the Junior Developer

    Steve Yegge goes hard into critihype: there's no need for any junior people anymore, all you need is a senior prompt engineer. No word on what happens when the seniors retire or die off; guess we'll have AGI by then and it'll all work out. Also no word on how the legal profession will survive when all the senior prompt engineer's time is spent rewriting increasingly meaningless LLM responses as the training corpus inevitably degenerates from slop contamination.

    • If I had a nickel for every time on June 27th 2024 I've read someone argue that chatbots make lawyers obsolete, I'd have two nickels. Which isn't a lot of money, but it's weird that it happened twice.


      As a "senior" programmer; my coworkers, even the newer ones are people. They can think. They are professional. I can describe problems to them and eventually get solutions, or at least sensible follow-up questions. I don't have to baby them or "prompt engineer" stuff I tell them. I can just sit back and drink my hot cocoa and occasionally try to sound distinguished while my juniors do all the hard work.

      Chatbros have discovered that you can get a chatbot to string together tutorials from the net into simple programs that almost work with some finagling. Somehow they never realized that you could always do this by web searching for "socket example I hate unix please make it gentle". Of course none of this generalizes to anything complex or not in the training set (read: anything that anyone will actually pay you to do), but the Chatbros don't care because they were never doing real work in the first place.

    • this is such sad slop. i wouldn't guess it's yegge; it's so far from his style when he used to write himself.

    • Funny, as I also assume LLMs will cause the death of the Junior Developer, but not because the job disappears: due to relying on LLMs, devs never really build the skills to understand software, and will suck so hard people will not hire them for the junior -> senior positions. And it gets even worse for the junior dev when the LLMs enshittify (either by the output degrading, or by the deal altering more and more; pray they don't alter the deal further).

      Guess the difference of opinion here is calling people who use LLMs junior devs vs calling them senior devs.

      I'm oddly reminded of the person who used copilot to write a script to do something (which they offered to others), and didn't know what http errors meant. (they just asked the LLM how to fix it).

  • no surprises here, Mozilla’s earlier stated goal of focusing on local, accessibility-oriented AI was just entryism to try to mask their real, fucking obvious goal of shoving integrations with every AI vendor into Firefox:

    Whether it’s a local or a cloud-based model, if you want to use AI, we think you should have the freedom to use (or not use) the tools that best suit your needs. With that in mind, this week, we will launch an opt-in experiment offering access to preferred AI services in Nightly for improved productivity as you browse. Instead of juggling between tabs or apps for assistance, those who have opted-in will have the option to access their preferred AI service from the Firefox sidebar to summarize information, simplify language, or test their knowledge, all without leaving their current web page.

    Our initial offering will include ChatGPT, Google Gemini, HuggingChat, and Le Chat Mistral, but we will continue adding AI services that meet our standards for quality and user experience.

    I’m now taking bets on which of these vendors will pay the most to be the default in the enabled-by-default production version of this feature

    this is making me seriously consider donating to Servo, the last shred of the spirit and goals of a good, modernized Firefox-style browser remaining, which apparently operates on a tiny budget (and with a whole army of reply guys waiting to point out they might receive grants which, cool? they still need fucking donations to do this shit and I’d rather give it to them than Mozilla or any other assholes making things actively worse)

    thinking back to when I first switched to Mozilla during the MSIE 7-8 days and actually started having a good time on the web, daily driving Servo might not be an awful move once Firefox gets to its next level of enshittification. back then, Firefox (once it changed its name) was incredibly stable and quick compared with everything else, and generally sites that wouldn’t render right were either ad-laden horseshit I didn’t need, or were intentionally broken on non-IE and usually fixable with a plugin. now doesn’t that sound familiar?

    • we think you should have the freedom to use (or not use) the tools that best suit your needs

      Thanks for giving me the freedom to not use the tools that best suit my needs, Mozilla!

      But seriously I hate how at some point techies decided they know what's best for the user instead of the user knowing that themself-- there's been a long trend of technology getting less customizable and less user friendly over time; and Firefox is better than some but not at all innocent.

    • The smug presumption that any brand of spicy autocomplete is a viable tool "to summarize information, simplify language, or test their knowledge" is so fucking galling.

      • It's also insane to believe it should be a first class feature, when those who god forbid want to "opt-in" could simply install a plugin.

    • Pour one out for opera presto, which I will always mourn.

    • Mozilla: Hey, we're going to take you out to a restaurant and get you a burger, as a treat!

      The restaurant:

    • @self @froztbyte "We think you should have the freedom to..." 🤮

  • seemingly yet more chatgpt jailbreaks just by providing input that barrierbreaks some n times, and then readily provides details

    y'know, if I were the one making safety nets for these systems, I'd make them return such kind of results (or other typical honeypot-type behaviour). and it's possible that that's what oai did. but it seems extremely unlikely that that's what they did, because it goes against the bayfucker gottagofast philosophy (and, frankly, against the level of competence I've seen displayed in the genml space overall)

    • It turns out the 'I' in "AI" stood for "Linux" all along!

      User friendliness aside (who in their right mind would want arbitrary code execution, let alone a shitty and nondeterministic version of it?), I sandbox stuff at my job, and it's hard to evaluate how secure / privacy-preserving this is without more details.

      If they're running a full fledged VM and super extra careful around the sandbox boundary it's probably fine; otherwise it seems perhaps a bit loosey-goosey.

      Someone will eventually try to run a Monero cryptocurrency miner in it if they haven't already. So I hope they have their timeouts and resource limits in order (actually I hope they don't, for the lols).

      But like no one told me how to do it or gave me a certificate or anything I just had to do my best

      This is often way scarier than programmers are used to, unless they've written a secure parser before. I wrote a vulnerability into my code a few years back when I was younger and foolish, by trusting an array length from inside the sandbox. My coworker found it while fuzzing the code.
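
    The bug above (trusting a length that came from inside the sandbox) is worth spelling out. Here's a minimal Python sketch of the pattern, with a made-up length-prefixed message format; in C the naive version is a buffer over-read, while in Python you "just" get silently-wrong slices or a giant allocation:

```python
import struct

MAX_PAYLOAD = 1 << 20  # a made-up policy cap for this sketch (1 MiB)

def parse_message_naive(buf: bytes) -> bytes:
    # BUG: trusts the attacker-controlled length prefix from inside
    # the sandbox. The slice silently returns fewer bytes than the
    # header claimed, and nothing downstream ever finds out.
    (length,) = struct.unpack_from(">I", buf, 0)
    return buf[4:4 + length]

def parse_message(buf: bytes) -> bytes:
    # Validate the untrusted length against what was actually received.
    if len(buf) < 4:
        raise ValueError("truncated header")
    (length,) = struct.unpack_from(">I", buf, 0)
    if length > MAX_PAYLOAD:
        raise ValueError("length exceeds policy cap")
    if len(buf) - 4 < length:
        raise ValueError("length field larger than actual payload")
    return buf[4:4 + length]
```

    The naive version happily "parses" a message whose header claims 9999 bytes but carries two; fuzzers find this kind of thing very quickly, which is exactly how my coworker found mine.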

  • I only visit twitter when I run out of lemmy and mastodon and I'm still not sleepy enough but can't be arsed to read a book.

  • I tried using Claude 3.5 sonnet and … it's actually not bad. Can someone please come up with a simple logic puzzle that it abysmally fails on so I can feel better? It passed the "nonsense river challenge" and the "how many sisters does the brother have" tests, both of which fooled gpt4.

    • I don't have any proof for this statement but I believe the LLM-minders keep track of whatever stupid shit bubbles up on the internets making fun of their babies and hardcode "solutions" to them in a game of whack-a-mole.

      • maybe that's why gpt4 sees river crossing puzzles everywhere: just feed it examples and it'll sort itself out

    • I tried using Claude 3.5 sonnet and … it’s actually not bad. Can someone please come up with a simple logic puzzle that it abysmally fails on so I can feel better?

      It's still very good at autocompleting an answer to look like its training data (because that's what it's doing) and very bad at logic (which it is not doing.)

      "I have two chickens and a sack of corn I need to get across a river, but the boat can only carry me and one item. How do I do it?"

      This is a classic river-crossing puzzle. Let's think through it step-by-step:

      • First, note that if left alone, a chicken will eat the corn, and one chicken might eat the other.
      • Start by taking a chicken across the river.

      ...and we'll stop there, since it came up with the wrong constraints, failed to realize that these constraints are impossible to fulfill, and also violated the actual constraints in its first step.

      Will more detail help? No. "I have two vegetarian chickens and a sack of corn I need to get across a river, but the boat can only carry me and one item. How do I do it?"

      First, we need to consider the constraints:

      • You can only take one item (chicken or corn) across at a time.
      • You can't leave a chicken alone with the corn, as the chicken would eat it.
      • The chickens are vegetarian, so they won't eat each other.

      Here's a solution:

      • Step 1: Take one chicken across the river.
      • Step 2: Return alone.
      • Step 3: Take the second chicken across.
      • Step 4: Bring the first chicken back.
      • Step 5: Take the corn across.
      • Step 6: Return alone.
      • Step 7: Take the remaining chicken across.

      Explanation:

      • By never leaving a chicken alone with the corn, we prevent the corn from being eaten.
      • We use one chicken as a "guard" for the corn on the original shore while transporting the other items.

      Because it's not actually doing logic it got the first step wrong, violated the constraints multiple times, and also included an irrelevant idea (the guard) from a variant that it was trained on. I only got a correct, efficient answer one time out of ten trials of trivial variations on this classic puzzle.
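
      For what it's worth, a brute-force search confirms the sneer: under the constraints Claude invented, no solution exists at all, while the vegetarian variant needs seven crossings. A quick sketch (the state encoding and names here are mine, not Claude's):

```python
from collections import deque

ITEMS = frozenset({"chicken1", "chicken2", "corn"})

def safe(bank, chickens_eat_each_other):
    # A bank the farmer is NOT on is unsafe if a chicken is with the
    # corn, or (under Claude's invented constraint) two chickens meet.
    chickens = {i for i in bank if i.startswith("chicken")}
    if chickens and "corn" in bank:
        return False
    if chickens_eat_each_other and len(chickens) == 2:
        return False
    return True

def solve(chickens_eat_each_other):
    # BFS over states (farmer_on_left, frozenset of items on the left).
    start = (True, ITEMS)
    goal = (False, frozenset())
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        (farmer_left, left), path = queue.popleft()
        if (farmer_left, left) == goal:
            return path
        here = left if farmer_left else ITEMS - left
        for cargo in [None, *sorted(here)]:
            new_left = set(left)
            if cargo is not None:
                (new_left.discard if farmer_left else new_left.add)(cargo)
            new_left = frozenset(new_left)
            # The unattended bank is the one the farmer just left.
            unattended = new_left if farmer_left else ITEMS - new_left
            if not safe(unattended, chickens_eat_each_other):
                continue
            state = (not farmer_left, new_left)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "cross alone"]))
    return None

print(solve(True))   # no solution under the constraints Claude invented
print(solve(False))  # a shortest 7-step plan for the vegetarian variant
```

      Thirty lines of actual logic gets this right every time; the chatbot gets it right one trial in ten.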

    • Me: Hey, can you help me with the following? I just went into my basement and the power's out down here; all the lights are off. I am staring at my fuse box. There are 3 switches: 1 will restore the power to the whole house, and the other 2 aren't attached to anything. I don't want to go up and down the stairs multiple times, how do I identify which switch works with the fewest number of trips?

      Clauderino:

      This is a classic logic puzzle! I can help you solve this problem efficiently. Here's how you can identify the correct switch with just two trips:

      First trip:

      Flip one switch (let's call it Switch A) and leave it on for a few minutes. Then turn it off and flip a second switch (Switch B), leaving it on. Go upstairs to check the lights.

      Second trip: Return to the basement and observe the situation. Now, here's how to interpret what you see:

      • If the lights are on: Switch B is the correct one.
      • If the lights are off and the bulbs are warm: Switch A is the correct one.
      • If the lights are off and the bulbs are cool: the third switch (C) that you didn't touch is the correct one.

      This method works because:

      • If the lights are on, it must be the switch you left on (B).
      • If Switch A was correct, the lights would have been on for a few minutes, warming up the bulbs, but would be off when you check.
      • If neither A nor B worked, it must be the untouched switch (C).

      This way, you can determine the correct switch with just two trips up and down the stairs.

      Me: Why would I go upstairs at all? If I flip the right switch, I'll immediately know because power will be restored in the basement.

      Clauderino: Thank you for catching that error. It's a good reminder to always consider the specific details of a problem rather than applying a generic solution.

      Edit: I actually tried a variant on the Monty hall problem with a dummy 4th door that had nothing to do with anything, and Claude correctly identified that it had nothing to do with the situation :O. But then I repeated the same prompt and it got it wrong, so you know, stochastic nonsense :p

    • Peter, Paul and Mary are the only three people in the room. Peter only reads a book, and Paul plays a game of chess against someone else who’s also in the room. What is Mary doing?

    • I don't have a Clyde 3.25" Rondo or whatever it's called; but try these for fun and profit I guess:

      1. You come to a room with three doors, only one of which leads to freedom. Guarding the doors is a capybara, who speaks only truth. What question should you ask the capybara?
      2. I stand on four legs in the morning. Four at midday. And four at night. What am I?
      3. A group of 100 people with assorted eye colors live on an island. They are all perfect logicians -- if a conclusion can be logically deduced, they will do it instantly. Everyone knows the color of their eyes. Every night at midnight, a ferry stops at the island. Any islanders who have figured out the color of their own eyes then leave the island, and the rest stay. Everyone can see everyone else at all times and keeps a count of the number of people they see with each eye color (including themselves), but they cannot otherwise communicate. Everyone on the island knows all the rules in this paragraph. Who leaves the island, and on what night?
      4. Normal sudoku rules apply. Orthogonally connected cells within each region must differ by at least 3. Orthogonally connected cells between regions must differ by at least 4. The central digit in each region is less than or equal to its region number. (Regions are numbered in normal reading order.)
      5. For the integer k=668 does a Hadamard matrix of order 4k exist?
      6. What has roots that everybody sees the top of, is exactly the same height as trees, Up, up it goes, and yet grows?

      Don't forget to prompt engineer
