Stubsack: weekly thread for sneers not worth an entire post, week ending 23 February 2025

awful.systems /post/3491424

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

133 comments
  • Interesting slides: Peter Gutmann - Why Quantum Cryptanalysis is Bollocks

    Since quantum computers are far outside my expertise, I didn't realize how far-fetched it currently is to factor large numbers with quantum computers. I already knew it's not near-future stuff for practical attacks on e.g. real-world RSA keys, but I didn't know it's still that theoretical. (Although of course I lack the knowledge to assess whether that presentation is correct in its claims.)

    But also, while reading it, I kept thinking how many of the broader points it makes also apply to the AI hype... (for example, the unfounded belief that game-changing breakthroughs will happen soon).

    • It's been frustrating to watch Gutmann slowly slide. He hasn't slid that far yet, I suppose. Don't discount his voice, but don't let him be the only resource for you to learn about quantum computing; fundamentally, post-quantum concerns are a sort of hard read in one direction, and Gutmann has decided to try a hard read in the opposite direction.

      Page 19, complaining about lattice-based algorithms, is hypocritical; lattice-based approaches are roughly as well-studied as classical cryptography (Feistel networks, RSA) and elliptic curves. Yes, we haven't proven that lattice-based algorithms have the properties that we want, but we haven't proven them for classical circuits or over elliptic curves, either, and we nonetheless use those today for TLS and SSH.

      Pages 28 and 29 are outright science denial and anti-intellectualism. By quoting Woit and Hossenfelder — who are sneerable in their own right for writing multiple anti-science books each — he is choosing anti-maths allies, which is not going to work for a subfield of maths like computer science or cryptography. In particular, p28 lies to the reader with a doubly-bogus analogy, claiming that both string theory and quantum computing are non-falsifiable and draw money away from other research. This sort of closing argument makes me doubt the entire premise.

      • Thanks for adding the extra context! As I said, I don't have the necessary level of knowledge in physics (and also in cryptography) to have an informed opinion on these matters, so this is helpful. (I've wanted to get deeper in both topics for a long time, but life and everything has so far not allowed for it.)

        About your last paragraph, do you by chance have any interesting links on "criticism of the criticism of string theory"? I wonder, because I have heard the argument "string theory is non-falsifiable and weird, but it's pushed over competing theories by entrenched people" several times already over the years. Now I wonder, is that actually a serious position or just conspiracy/crank stuff?

    • Comparing quantum computing to time machines or faster-than-light travel is unfair. In order for the latter to exist, our understanding of physics would have to be wrong in a major way. Quantum computing presumes that our understanding of physics is correct. Making it work is "only" an engineering problem, in the sense that Newton's laws say that a rocket can reach the Moon, so the Apollo program was "only" an engineering project. But breaking any ciphers with it is a long way off.

      • Comparing quantum computing to time machines or faster-than-light travel is unfair.

        I didn't interpret the slides as an attack on quantum computing per se, but rather an attack on over-enthusiastic assertions of its near-future implications. If the likelihood of near-future QC breaking real-world cryptography is so extremely low, it's IMO okay to make a point by comparing it to things which are (probably) impossible. It's an exaggeration of course, and as you point out the analogy isn't correct in that way, but I still think it makes a good point.

        What I find insightful about the comparison is that it puts the finger on a particular brain worm of the tech world: the unshakeable belief that every technical development will grow exponentially in its capabilities. So as soon as the most basic version of something is possible, it is believed that the most advanced forms of it will follow soon after. I think this belief was created because it's what actually happened with semiconductors, and of course the bold (in its day) prediction that was Moore's law, and then later again, the growth of the internet.

        And now this thinking is applied to everything all the time, including quantum computers (and, as I pointed to in my earlier post, AI), driven by hype, by FOMO, by the fear of "this time I don't want to be among those who didn't recognize it early". But there is no inherent reason why a development should necessarily follow such a trajectory. That doesn't mean of course that it's impossible or won't get there eventually, just that it may take much more time.

        So in that line of thought, I think it's ok to say "hey look everyone, we have very real actual problems in cryptography that need solving right now, and on the other hand here's the actual state and development of QC which you're all worrying about, but that stuff is so far away you might just as well worry about time machines, so please let's focus more on the actual problems of today." (that's at least how I interpret the presentation).

      • heh yup. I think the most recent one (somewhere in the last year) was something like 12-bit rsa? stupendously far off from being a meaningful thing

        I’ll readily admit to being a cryptography mutt and a qc know-barely-anything, and even from my limited understanding of where people are at (with how many qubits they’ve managed to achieve in practical systems), everything is hilariously, woefully far off in terms of attacks

        that doesn’t entirely invalidate pqc and such (since the notion there is not merely defending against today/soon but also a significant timeline)

        one thing I am curious about (and which you might’ve seen or be able to talk about, blake): is there any kind of known correlation between qubits and viable attacks? I realize part of this quite strongly depends on the attack method as well, but off the cuff I have a guess (“intuition” is probably the wrong word) that it probably scales some weird way (as opposed to linear/log/exp)
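        On the qubits-vs-viable-attacks question: the commonly cited ballpark for Shor's algorithm is roughly 2n+3 logical qubits to factor an n-bit modulus (a Beauregard-style circuit), with gate counts growing around O(n^3). A toy sketch under those assumptions — the constants are illustrative, and quantum error correction inflates *physical* qubit counts by orders of magnitude on top of this:

```python
# Ballpark scaling for Shor's algorithm against an n-bit RSA modulus,
# using the commonly cited 2n+3 logical-qubit figure and an O(n^3)
# gate count for the textbook circuit. Illustrative assumptions only:
# error correction multiplies physical qubit needs by orders of magnitude.

def shor_logical_qubits(n_bits: int) -> int:
    """Approximate logical qubits needed to factor an n-bit modulus."""
    return 2 * n_bits + 3

def shor_gate_count(n_bits: int) -> int:
    """Rough gate count for the textbook modular-exponentiation circuit."""
    return n_bits ** 3

for n in (1024, 2048, 4096):
    print(f"RSA-{n}: ~{shor_logical_qubits(n)} logical qubits, "
          f"~{shor_gate_count(n):.1e} gates")
```

        So logical qubits scale only linearly with key size; the practical gap lives in the logical-to-physical overhead and the circuit depth, which is why "qubits achieved" headlines don't map cleanly onto attack viability.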

  • I've been listening to faster and worse (see https://awful.systems/comment/6216748 ) and I like it so I wanted to give it ups.

    (I think this and the memory palace are the only micro podcasts I've listened to. idk why it isn't a more common format)

    • thanks! It might be uncommon because it's a real pain in the ass to keep it short. Every time I make one I stress about how easily my point can be misunderstood because there are so few details. Good way to practice the art of moving on

  • https://mastodon.gamedev.place/@lritter/114001505488538547

    master: welcome to my Smart Home

    student: wow. how is the light controlled?

    master: with this on-off switch

    student: i don't see a motor to close the blinds

    master: there is none

    student: where is the server located?

    master: it is not needed

    student: excuse me but what is "Smart" about all of this?

    master: everything.

    in this moment, the student was enlightened

    • Deep Research is the AI slop of academia — low-quality research-slop built for people that don't really care about quality or substance, and it’s not immediately obvious who it’s for.

      it's weird that Ed stops there, since the answer almost writes itself. ludic had a bit about how in companies bigger than three guys in a shed, people who sign software contracts don't use that software in any normal way;

      The idea of going into something knowing about it well enough to make sure the researcher didn't fuck something up is kind of counter to the point of research itself.

      conversely, if you have no idea what you are doing, you won't be able to tell if machine-generated noise is in any way relevant or true

      The whole point of hiring a researcher is that you can rely on their research, that they're doing work for you that would otherwise take you hours.

      but but, this lying machine can output something in minutes, so this bullshit generator obviously makes human researchers obsolete. this is not for academia because it's utterly unsuitable and google scholar beats it badly anyway; this is not for wide adoption because it's nowhere near free tier; this is for idea guys who have enough money to shell out $whatever monthly subscription and prefer to set a couple hundred dollars on fire instead of hiring a researcher/scientist/contractor. especially keeping in mind that a contractor might tell them something they don't want to hear, but this lmgtfy x lying box (but worse, because it pulls lots of seo spam) won't

      OpenAI's next big thing is the ability to generate a report that you would likely not be able to use in any meaningful way anywhere, because while it can browse the web and find things and write a report, it sources things based on what it thinks can confirm its arguments rather than making sure the source material is valid or respectable.

      e: this is also insidious and potent attack surface marketing opportunity against clueless monied people who trust these slop machines for some reason. and it might be exploitable by tuning seo just right

  • found in the wild, The Tech Barons have a blueprint drawn in crayon

    speaking of shillrinivan, anyone heard anything more about cult school after the news that no-one liked bryan's shitty food packs?

    • wait that's it? he wants to "replace" states with (vr) groupchats on blockchain? it can't be this stupid, you must be explaining this wrong (i know, i know, saying it's just that makes it look way more sane than it is)

      The basic problem here is that Balaji is remarkably incurious about what states actually do and what they are for.

      libertarians are like house cats etc etc

      In practice, it's a formula for letting all the wealthy elites within your territorial borders opt out of paying taxes and obeying laws. And he expects governments will be just fine with this because… innovation.

      this is some sovereign citizen type shit

    • Having read the whole book, I am now convinced that this omission is not because Srinivasan has a secret plan that the public would object to. The omission, rather, is because Balaji just isn't bright enough to notice.

      That's basically the entire problem in a nutshell. We've seen what people will fill that void with and it's "okay but I have power here now and I dare you to tell me I don't" and you know who happens to have lots of power? That's right, it's Balaji's billionaire bros! But this isn't a sinister plan to take over society - that would at least entail some amount of doing what states are for.

      Ed:

      "Who is really powerful? The billionaire philanthropist, or the journalist who attacks him over his tweets?"

      I'm not going to bother looking up which essay or what terrible point it was in service to, but Scooter Skeeter of all people made a much better version of this argument by acknowledging that the other axis of power wasn't "can make someone feel bad through mean tweets" but was instead "can inflict grievous personal violence on the aged billionaires who pay them for protection". I can buy some of these guys actually shooting someone, but the majority of these wannabe digital lordlings are going to end up following one of the many Roman Emperors of the 3rd century and get killed and replaced by their Praetorians.

      • the majority of these wannabe digital lordlings are going to end up following one of the many Roman Emperors of the 3rd century and get killed and replaced by their Praetorians.

        this is a possibility lots of the prepper ultra rich are concerned with, yet I don't recall that I've ever heard the tech scummies mention it. they don't realize that their fantasized outcome is essentially identical to the prepper societal breakdown, because they don't think of it primarily as a collapse.

        more generally, they seem to consider every event in the most narcissistic terms: outcomes are either extensions of their power and luxury to ever more limitless forms or vicious and unjustified leash jerking. there's a comedy of the idle rich aspect to the complacency and laziness of their dream making. imagine a boot stamping on a face, forever, between rounds at the 9th hole

      • I can buy some of these guys actually shooting someone, but the majority of these wannabe digital lordlings are going to end up following one of the many Roman Emperors of the 3rd century and get killed and replaced by their Praetorians.

        i think it'll turn out muchhh less dramatic. look up cryptobros, how many of them died at all, let alone this way? i only recall one, ruja ignatova, bulgarian scammer whose disappearance might be connected to local mafia. but everyone else? mcafee committed suicide, but that might be after he did his brain's own weight in bath salts. for some of them their motherfuckery caught up with them and they're in prison (sbf, do kwon), but most of them walk freely and probably don't want to attract too much attention. what might happen, i guess, is that some of them will cheat one another out of money, status, influence, what have you, and the scammed ones will just slide into irrelevance. you know, get a normal job, among normal people, and not raise suspicion

      • That’s basically the entire problem in a nutshell.

        I think a lot of these people are cunning, aka good at somewhat sociopathic short-term plans and thinking, and they confuse this ability (and their survivor-biased success) for being good at actual planning (or just think that planning is worthless, after all move fast and break things (and never think about what you just said)). You don't have to actually have good plans if people think you have charisma/a magical money-making ability (a machine which needs more and more rigging of the casino, putting money on a lot of risky bets in the hope one big win pays for it all).

        Doesn't help that some of them seem to either be on a lot of drugs, or have undiagnosed adhd. Unrelated, Musk wants to go into Fort Knox all of a sudden, because he saw a post on twitter which has convinced him 'they' stole the gold (my point here is that there is no way he was thinking about Knox at all before he randomly came across the tweet, the plan is crayons).