froztbyte @awful.systems · 16 Posts · 1,425 Comments · Joined 2 yr. ago
the wildest bit is that one could literally just … go do the thing. like you could grab the sdk and run through the tutorial and actually have babby’s first gpu program in not too long at all[0], with all the lovely little bits of knowledge that entails
but nah, easier to just make some nonsense up out of thirdhand conversations misheard out of a gamer discord talking about a news post of a journalist misunderstanding a PR statement, and then confidently spout that synthesis
[0] - I’m eliding “make the cuda toolchain run” for argument of simplicity. could just rent a box that has it, for instance
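(for the curious: "babby's first gpu program" really is about this small. a minimal sketch of the classic CUDA starter — assumes you've got the toolkit and an nvidia card, per the footnote, or a rented box that has them)

```cuda
#include <cstdio>

// kernel: this function runs on the GPU, one copy per thread
__global__ void hello() {
    printf("hello from gpu thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();        // launch 1 block of 4 threads
    cudaDeviceSynchronize();  // wait for the kernel (and its printf) to finish
    return 0;
}
```

build with `nvcc hello.cu -o hello` and that's it — that's the whole on-ramp the tutorial walks you through.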
I want to say “plato’s cave of media awareness” but that’s perhaps too much a mouthful
is this the They I keep hearing about at parties? the ones behind takeout burgers getting smaller? the bastards!
yep, clueless. can't tell a register apart from a soprano. and allocs? the memory's right there in the machine, it has it already! why does it need an alloc!
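(the joke above, unpacked for anyone who wants it: host RAM and device VRAM are separate address spaces, so no, the GPU does not "already have" your data — you allocate and copy explicitly. a rough sketch, not anyone's actual code:)

```cuda
#include <cuda_runtime.h>

int main() {
    const int n = 4;
    int host[n] = {1, 2, 3, 4};
    int *dev = nullptr;

    cudaMalloc(&dev, n * sizeof(int));  // allocate a buffer in device memory
    cudaMemcpy(dev, host, n * sizeof(int),
               cudaMemcpyHostToDevice); // ship the data across the bus
    // ... launch kernels against `dev` here ...
    cudaFree(dev);                      // and give the memory back
    return 0;
}
```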
fuckin' dipshit
next time you want to do a stupid driveby, pick somewhere else
I do sorta get the idea that this is (one of the reasons) exactly why ol' felon is trying to get his hands on all the funding faucets
foot-loom powered ml
for anyone reading this comment hoping for an actual eli5, the "technical POV" here is nonsense bullshit. you don't program GPUs with assembly.
the rest of the comment is the poster filling in bad comparisons with worse details
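(to be concrete about it: "programming the GPU" in practice looks like the C++ below. the toolchain lowers this to PTX and then SASS for you — `nvcc -ptx add.cu` dumps the intermediate if you ever want to look at it. hypothetical example, not from the thread:)

```cuda
// plain C++ plus a launch annotation; no assembly anywhere in sight
__global__ void add(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) out[i] = a[i] + b[i];                // one element per thread
}
```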
aww, look at the little collaborators trying to pretend they both got duped instead of both having been active and enthusiastic enablers
you are not tall enough for this ride, go try some candyfloss and walking the hall of mirrors instead
the point is that your eli5 is unfounded rumour hearsay bullshit (and thus it’s entirely pointless to spread it), then when giving you a relatively gentle indication of that you decided to cosplay an ostrich
pro-tip: if it ain’t something you actually understand something about, probably best to avoid uncritically amplifying shit about it
mine isn’t a “USA v China: Jelly Wrestling Deluxe” comment and you’re not really understanding the point
I've been trying to avoid inundating people with US politics, but it's extremely bad. Like constitutional crisis, rise of techno-fascism, dismantling of the administrative state, transgender extermination, put career roadblocks in front of minorities bad.
yep. haven’t been posting about it here because not sure where here we’d put it (while a lot of it is well within the orbit of regular content and posters) and it’s not quite entirely anything I can do anything about but offer words of comfort and keeping watch on the nasty shit, but been speaking a lot with friends in places (signal generally, or some other spaces we actually control (i.e. not discord, etc))
I feel moderately confident that at least for a bit of the foreseeable future we’ll be okay this side of the world, but I also know enough history and context to know how vacuous that is by itself. these fuckers won’t stop.
I also wish I could just make people understand that none of this is by mistake, none of this is these fuckers just finding some shit they disagree with under the seat cushions. I wish I could make them understand the depth and extent of planning and preparation that went into this, the sheer commitment behind it all. but too often such concerns would all be received as this toot put it
there’s so much more I could say but I guess I’ll leave it there for now
I saw (via Stross) a mention that passport issuance was already starting to hit weirdness too
e: this one
sidebar: I definitely wouldn’t be surprised if it comes to this overall being a case of “a shop optimised by tuning, and then it suddenly turns out the entire industry has never tried to tune a thing ever”
because why try hard when the money taps are open and flowing free? velocity over everything! this is the bayfucker way.
okay so that post’s core supposition (“using ptx instead of cuda”) is just ~~fucking wrong~~ fucking weird and I’m not going to spend time on it, but it links to this tweet which has this:
> DeepSeek customized parts of the GPU’s core computational units, called SMs (Streaming Multiprocessors), to suit their needs. Out of 132 SMs, they allocated 20 exclusively for server-to-server communication tasks instead of computational tasks
this still reads more like simply tuning allocation than outright scheduler and execution control (which your post alluded to)
[x] doubt
e: struck the original wording because cuda still uses ptx anyway, whereas this post reads like it’s saying “they steered ptx directly”. at first I read the tweet more like “asm vs python” but that doesn’t appear to be what that part meant to convey. still doubting the core hypothesis tho
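(and on the "ptx instead of cuda" framing: they're not alternatives. cuda compiles *to* ptx, and cuda c++ even lets you embed ptx inline when you want one specific instruction — e.g. reading the hardware lane id, sketched here as an illustration:)

```cuda
// inline PTX inside ordinary CUDA C++: the two are layers, not rivals
__device__ unsigned lane_id() {
    unsigned id;
    asm volatile("mov.u32 %0, %%laneid;" : "=r"(id));  // PTX special register
    return id;
}
```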