
  • To be scrupulously fair there were multiple banners on LW announcing that prices would rise unless locked in early. NFC what they started at though, maybe $400?

    Personally you would have to pay me $5,500 + room and board to force me to attend.

  • Additional "points" for the commenting system using local times in the user's browser, thereby timestamping further unfunny AFJ at 2 Apr in my timezone.

  • LOL from the comments

    [...] "people having unreasonably high expectations for epistemics in published work" is definitely a cost of dealing with EAs!

  • Their dumb contraction should have doomed the project from the start...

  • I knew the HN reaction to Marine Le Pen being banned from being elected to public office would be unhinged, but weirdly I did not have "good, she's a woman and weak so now a strong man can lead RN to glorious victory" on my bingo card. More fool me, misogyny is always on the card.

    https://news.ycombinator.com/item?id=43534140

  • For some reason it's on brand for HN that a discussion of different dash widths has stuck on the front page for more than 24h

    https://news.ycombinator.com/item?id=43497719

    Extra spice and relevance for the observation that GenAI text apparently has a lot of em-dashes in it, so add that to the frequency of the word "delve".

  • I decided to remove that comment because of the risk of psychic damage.

  • From the comments

    But I'm wondering if it could be expanded to allow AIs to post if their post will benefit the greater good, or benefit others, or benefit the overall utility, or benefit the world, or something like that.

    (https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong?commentId=xnfHpn9ryjKqG8WKA)

    No biggie, just decide one of the largest open questions in ethics and use that to moderate.

    (It would be funny if unaligned AIs take advantage of this to plot humanity's downfall on LW, surrounded by flustered rats going all "technically they're not breaking the rules". Especially if the dissenters are zapped from orbit 5s after posting. A supercharged Nazi bar, if you will)

  • Annoying nerd annoyed annoying nerd website doesn't like his annoying posts:

    https://news.ycombinator.com/item?id=43489058

    (translation: John Gruber is mad HN doesn't upvote his carefully worded Apple tonguebaths)

    JWZ: take the win, man

  • As it is they’re close enough to actual power and influence that they’re enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.

    This is consistent if you believe rights are contingent on achieving an integer score on some bullshit test.

  • Note I am not endorsing their writing - in fact I believe the vehemence of the reaction on HN is due to the author being seen as one of them.

  • LW discourages LLM content, unless the LLM is AGI:

    https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong

    As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.

    Never change LW, never change.