Posts: 0 · Comments: 613 · Joined: 11 mo. ago

  • I'm pretty sure you could download a decent Markov chain generator onto a TI-89 and do basically the same thing with a more in-class-appropriate tool, but speaking as someone with dogshit handwriting, I'm so glad to have graduated before this was a concern. Godspeed, my friend.
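
    In case anyone wants to try it, here's a minimal sketch of the kind of generator I mean, in Python rather than TI-BASIC (the corpus and names are made up for illustration):

    ```python
    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the list of words observed to follow it."""
        words = text.split()
        chain = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            chain[prev].append(nxt)
        return chain

    def generate(chain, start, length=20):
        """Random-walk the chain to babble out plausible-looking prose."""
        out = [start]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the cat"
    print(generate(build_chain(corpus), "the"))
    ```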

  • There's a whole lot of ontological confusion going on here, and I want to make sure I'm not going too far in the opposite direction. Information, in the mathematical, Shannonian sense, refers specifically to identifying one value out of a set of possible values. In that sense, no underlying physical state could be said to hold "more" information than any other, right? Like, depending on the encoding, a given amount of information can take up a different amount of space on a channel (TRUE vs. T vs. 1), but just changing which arrangement of bits is currently in use doesn't increase or decrease the total amount of information in the channel. I'm sure there's some interesting physics to be done about our ability to meaningfully read or write a given amount of space (something something quantum something something), but the idea of information somehow existing independently, rather than being a projection onto the probability distribution of states in the underlying physical world, is basically trying to find the physical properties of the Platonic forms or the mass of the human soul.
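
    To put a toy number on that point: a fair binary source carries exactly one bit per symbol no matter how the two outcomes are spelled. A quick sketch (hypothetical code, purely illustrative):

    ```python
    import math

    def entropy(probs):
        """Shannon entropy in bits: H = -sum(p * log2(p))."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Whether the two outcomes are rendered "TRUE"/"FALSE", "T"/"F", or
    # "1"/"0", the possibility space is still two equally likely values,
    # so the information per symbol is identical. Only the channel space
    # used by the encoding changes.
    for spelling in [("TRUE", "FALSE"), ("T", "F"), ("1", "0")]:
        print(spelling, entropy([0.5, 0.5]), "bit")  # always 1.0 bit
    ```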

  • Honestly I'm more surprised to learn that this derives from actual insights being misunderstood or misapplied rather than being whole-cloth bullshit. Although the Landauer principle seems kind of self-evident to me? Like, storing a bit of data depends more on the fact that an action was performed than on the particular state being manipulated, so whether we're talking about voltages or magnets or whatever other mechanism is responsible for maintaining that state, the initial "write" requires some kind of action and therefore some expenditure of energy, however tiny (quick numbers below).

    Then again I had never heard of the concept before today and I'm almost certainly getting way out of my depth and missing a lot of background.
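
    For scale, the Landauer bound itself is just k·T·ln 2 per bit erased, which a quick back-of-the-envelope (assuming room temperature) puts at a vanishingly small number:

    ```python
    import math

    # Landauer limit: minimum energy dissipated to erase one bit at temperature T.
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 300.0           # assumed room temperature, K

    E_min = k_B * T * math.log(2)
    print(f"{E_min:.3e} J per bit")  # ~2.9e-21 J
    ```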

  • Obviously mathematically comparing suffering is the wrong framework to apply here. I propose a return to Aristotelian virtue ethics. The best shrimp is a tasty one, the best man is a philosopher-king who agrees with everything I say, and the best EA never gets past drunkenly ranting at their fellow undergrads.

  • I mean, that kind of suggests that you could use ChatGPT to confabulate work for his class and he wouldn't have room to complain? Not that I'd recommend testing that, because someone who uses ChatGPT this way clearly doesn't have an internally consistent worldview informing those judgements.

  • Oh the author here is absolutely a piece of work.

    Here's an interview where he's talking about the biblical support for all of this and the ancient Greek origins of blah blah blah.

    I can't definitively predict this guy's career trajectory, but one of those cults where they have to wear togas is not out of the question.

  • You're missing the most obvious implication, though. If it's all simulated, or there's a Cartesian demon afflicting me, then none of you have any moral weight. Even more importantly, if we assume that the SH is true, then I'm smarter than you because I thought of it first (neener neener).

  • How sneerable is the entire "infodynamics" field? Because it seems like it should be pretty sneerable. The first referenced paper on the "second law of infodynamics" seems to indicate that information has some kind of concrete energy, which brings to mind that experiment where they tried to weigh someone as they died to identify the mass of the human soul. It also feels like a gross misunderstanding to describe a physical system as gaining or losing information in the Shannon framework, since unless the total size of the possibility space is changing there's no change in total information. Like, all strings of 100 characters carry the same amount of information, even though only a very few actually mean anything in a given language. I'm not sure it makes sense to talk about the amount of information in a system naturally increasing or decreasing outside of data loss in transmission? IDK, I'm way out of my depth here, but it smells like BS and the limited pool of citations doesn't build confidence.
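
    Back-of-the-envelope for that strings claim, assuming a uniform distribution over a 26-letter alphabet (toy numbers, not anything from the paper):

    ```python
    import math

    # If every 100-character string over a 26-letter alphabet is equally
    # likely, then identifying any one specific string conveys the same
    # amount of information, meaningful English or keyboard-mash alike.
    alphabet_size = 26
    length = 100
    bits = length * math.log2(alphabet_size)
    print(f"{bits:.1f} bits")  # ~470.0 bits for ANY particular string
    ```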

  • I was watching an old Day9 stream today, and this story is bouncing off of some comments he made about the importance of degenerate players to competitive design. Like, this is a pretty dumb outcome of regulation, but at the same time, if you try to write regulations with an exception for whatever "everybody knows" is in something, that process suddenly gets taken advantage of by self-interested manufacturers who will figure out how to convincingly argue that "everybody knows" their generic shampoo contains peanuts and shellfish or whatever. And in the context of this kind of regulation, that degenerate play will get people killed.

  • On a related note, shout-out to the banner image of a neatly spaced rectangular grid of trees, the one part of the book that, if I remember right, Scooter did actually read and sort of understand, even if he was unable to generalize beyond early modern forestry.

    Ironically that review was where I first encountered the work of the late James Scott.

  • There's also a common argument that the problem in AV accidents is primarily the other human drivers, which is a classic case of "if everyone just immediately changed over to doing things this way it would solve the problem!"

  • Honestly the most surprising and interesting part of that episode of Power(projection)Points with Perun was the idea of simple land mines as autonomous lethal systems.

    Once again, the concept isn't as new as they want you to think, moral and regulatory frameworks already exist, and the biggest contribution of the AI component is doing more complicated things than existing mechanisms but doing them badly.