Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this, and happy new year in advance.)
Illusions are entertaining, but they are also a useful diagnostic tool in cognitive science, philosophy, and neuroscience. A typical illusion shows a gap between how something "really is" and how something "appears to be", and this gap helps us understand the mental processing that leads to how something appears to be. Illusions are also useful for investigating artificial systems, and much research has examined whether computational models of perception fall prey to the same illusions as people. Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models. I present these models with illusion-illusions, neighbors of common illusions that should not elicit processing errors. These include such things as perfectly reasonable ducks, crooked lines that truly are crooked, circles that seem to have different sizes because they are, in fact, of different sizes, and so on. I show that many current vision language systems mistakenly see these illusion-illusions as illusions. I suggest that such failures are part of broader failures already discussed in the literature.
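For anyone curious what actually running this kind of test looks like, here's a minimal sketch — not from the paper; the model name, prompt, and image file are all my own placeholder assumptions — of feeding an "illusion-illusion" image to a vision language model through the OpenAI Python client and seeing whether it wrongly reports an illusion:

```python
# Toy sketch of the paper's setup: show a VLM two circles that really ARE
# different sizes and see if it pattern-matches to the Ebbinghaus illusion.
# Assumptions: OpenAI Python client, placeholder model name and image file;
# none of these specifics come from the paper itself.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("illusion_illusion_circles.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; the paper tests several such models
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Are the two orange circles the same size?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)

# A model that merely memorized the illusion will insist the circles are
# the same size "because it's an illusion", even though here they differ.
print(response.choices[0].message.content)
```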
"...according to my machine learning model we actually have a strong fit in favor of shooting at CEOs. There's a 66% chance that each shot will either jam or fail to hit anything fatal, which creates a strong Bayesian prior in favor, or at least merits collecting further data to scale our models"
"What do you mean I've defined the problem in order to get the desired result? Machine learning process said we're good. Why do you hate the future?"
Fellas, I was promised the first catastrophic AI event in 2024 by the chief doomers. There's only a few hours left to go; I'm thinking Skynet is hiding inside the Times Square orb. Stay vigilant!
A "high-tech" grifter car that only endangers its own inhabitants, a Trump and Musk fan showing his devotion by blowing himself up alongside symbols of both, the failure of this trained and experienced murderer to think through the actual material function of his weaponry, welcome to the Years of Lead Paint.
Claude has a response for ya. "You're oversimplifying. While language models do use probabilistic token selection, reducing them to 'fancy RNGs' is like calling a brain 'just electrical signals.' The learned probability distributions capture complex semantic relationships and patterns from human knowledge. That said, your skepticism about AI hype is fair - there are plenty of overinflated claims worth challenging."
Not bad for a bucket of bolts 'rando number generator', eh?
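For what it's worth, the "RNG" half of that is literally true at the decoding step. Here's a toy sketch (invented logits, nothing to do with any real model) of temperature sampling over next-token scores — the actual random draw that sits at the end of all the learned-distribution machinery:

```python
# Toy illustration of probabilistic token selection: softmax over
# next-token logits, then a temperature-weighted random draw.
# The logits below are made up; in a real LLM they come from the network.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    # Lower temperature = greedier, higher = more random.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax, subtracting the max for numerical stability.
    peak = max(scaled.values())
    exps = {tok: math.exp(v - peak) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # The literal "fancy RNG" part: one weighted draw.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Fake next-token scores for the context "The cat sat on the ..."
print(sample_next_token({"mat": 3.1, "couch": 2.4, "moon": 0.2}))
```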
maybe I’m late to this realization because it’s a very stupid thing to do, but a lot of the promptfondlers who come here regurgitating this exact marketing fluff and swearing they know exactly how LLMs work (when they obviously don’t) really are just asking the fucking LLMs, aren’t they?
I have landed on a "you can get fucked if you make this annoying for me, I don't need your product anyway" response to everything. The silver lining is that I will be dealing with way more bullshit while being just as angry all the time at everything.
as an amuse-bouche for the horrors that will follow this year, please enjoy this lobste.rs user reaching the melting-down end stage after going full Karen at someone who agrees with a submitted post saying LLMs are a dead end when it comes to AI.
Is the brain just a computer? by Iris van Rooij, a psychologist and cognitive scientist (who is also a bit skeptical of the claims made about AI). Might be an interesting read for the people here.
I find it impressive how gen-AI development produced a technology that is fine-tuned to generate content that looks precisely passably plausible, but never good enough to be correct or interesting or beautiful or worthwhile in any way.
Like, if I was trying to fill the Internet with noise to ruin it on purpose, I couldn't do better than this (mostly on account of me not having massive data centres nor the moral callousness to spew that much carbon, but still). It's like the ideal infohazard weapon if your goal is to worsen as many lives as you can.
Not sure where this came from, but it can't be all bad if it chaos-dunks on Yudkowsky like this. It was relayed to me via Ed Zitron's Discord; hopefully the Q isn't for Quillette or QAnon.
via this I just learned that google's about[0] to open the taps on fingerprinting allowance for advertisers
that'll go well.
I realize that a lot of people in the rtb (real-time bidding) space already spend an utterly obscene amount of effort and resources to try to do this shit in the first place, but jesus, this isn't even pretending. guess their projections for ad revenue must be looking real scary!
edit [0] - "about", as in next month. and they announced it last month.
Nobody outside the company has been able to confirm whether the impressive benchmark performance of OpenAI's o3 model represents a significant leap in actual utility or just a significant gap in the value of those benchmarks. However, they have released information showing that this ostensibly most-powerful model costs orders of magnitude more to run. The lede is in that first graph, which shows that whatever performance gain o3 offers comes at over ~$10 per request, with the headline-grabbing high-compute version costing ~$1,500 per request.
I hope they've been able to identify a market willing to pay out the ass for performance that, even if it somehow isn't overhyped, is roughly equivalent to an average college graduate.
noodling on a blog post - does anyone with more experience of LW/EA than me know if "AI safety" people are referencing the invention of nuclear weapons as a template for regulating/forbidding "AGI"?