It must be a silent R
  • It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to code a script to count it. Like, it's actually possible for an AI to get this answer consistently correct these days.
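    A minimal sketch of the kind of counting script such a model could write in a notebook; assuming the post refers to the classic "how many r's in strawberry" question (the word is my guess, not stated in the thread):

```python
# Toy letter-counting script: counts occurrences exactly, the sort of
# thing an LLM with notebook access could generate and run instead of
# guessing token by token.
def count_letter(word: str, letter: str) -> int:
    # Case-insensitive exact count via str.count.
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```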

  • Introducing Zed AI
  • I was sceptical at first too, but the way they're not just adding another chatbot, that it is basically an LLM request composing tool, is really interesting. It's not trying to hide what an LLM is behind some obscure personality interface; it's a text processing tool foremost. I like it!

  • Can AI even be open source? It's complicated
  • Personally, if I can't go from human-readable data to a complete model then I don't consider it open source. I understand these companies want to keep the magic sauce that's printing them money, but all the open source marketing is inherently dishonest. They should be clear that the architecture and the product they are selling are separate, much like proprietary software just has all the open source software it used as a footnote in its about screen.

  • McAfee accidentally made users call the devs of SQLite and complain.
  • The way I understand it, the users didn't necessarily realize McAfee is responsible, just that a bunch of SQLite files appeared in temp, so they might not connect the dots here anyway. Or even know McAfee is installed, considering their shady practices.

  • "prompt engineering"
  • I do think we're machines; I said so previously. I don't think there is much more to it than physical attributes, but those attributes let us have this discussion. Remarkable in its own right; I don't see why it needs to be more. But again, all personal opinion.

  • "prompt engineering"
  • I read this question a couple of times, initially assuming bad faith, and even considered ignoring it. The ability to change, would be my answer. I don't know what you actually mean.

  • "prompt engineering"
  • Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense... that's a whole other kettle of fish). I'd consider all "thinking things" machines, but if a machine responds to input in always the same way, then it is non-sentient, whereas if it incurs an irreversible change on receiving any input that can affect its future responses, then it has potential for sentience. LLMs can do continuous learning for sure, which may give the impression of sentience (whispers which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works to be in the same category.

    I'm still working on this definition, again just a personal viewpoint.

  • Scientists warn of AI collapse - Converging Results (Video)
  • Most of the largest datasets are kind of garbage because of this. I've had this idea to run the data through the network every epoch and evict samples that are too similar to the output for the next epoch, but I never tried it. Probably someone smarter than me already tried that and it didn't work. I just feel like there's some mathematical way around this that we aren't seeing. Humans are great at filtering the cruft, so there must be some indicators there.
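    For what it's worth, the eviction idea described here could look roughly like this; `model()` and `similarity()` are hypothetical stand-ins for a forward pass and a similarity metric, not any real API:

```python
# Rough sketch of per-epoch eviction: after each epoch, drop training
# samples the model already reproduces too closely, so the next epoch
# trains mostly on what the model still gets wrong.
# model() and similarity() are hypothetical placeholders.
def filter_epoch(samples, model, similarity, threshold=0.95):
    kept = []
    for sample in samples:
        output = model(sample)
        # Evict samples whose output is already near-identical to the
        # input; keep the rest for the next epoch.
        if similarity(sample, output) < threshold:
            kept.append(sample)
    return kept
```

Whether this actually avoids model collapse is an open question, as the comment itself admits; the sketch only shows the mechanics of the loop.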

  • [xkcd] A Bunch of Rocks (17 Nov 2008)
  • I think I see where you're coming from. The computer in the comic is a Rule 110 automaton, known to be Turing complete. It can perform complex calculations, allegedly.

    I suppose it can get a bit philosophical whether an incomplete time instant is even visible from the inside of a simulation, because nothing moves after a single pass until the full frame is complete, hence limiting perception.
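    For reference, the "consistent rule" the desert man applies can be written down directly; a minimal Rule 110 sketch, mapping one row of stones to the next (fixed-width edges padded with zeros, which the comic's infinite desert wouldn't need):

```python
# Rule 110: each new cell depends on the three cells above it.
# The number 110 in binary is 01101110, which gives this lookup table
# over the eight possible (left, centre, right) neighbourhoods.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    """Compute the next row of 'stones' from the current one."""
    # Pad the edges with zeros so every cell has two neighbours.
    padded = [0] + list(row) + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    row = step(row)
    print(row)
```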

  • [xkcd] A Bunch of Rocks (17 Nov 2008)
  • Unless you mean continuity as in non-discrete physics, which is fair play for this specific computer, but then there is the Planck length to consider. (Edit: I am aware that discrete vs. continuous is a whole holy war on its own.)

  • [xkcd] A Bunch of Rocks (17 Nov 2008)
  • He bases the next row of stones on the previous one, changing them by a consistent rule? It's an unorthodox computer with infinite memory. Why does that not count as a simulation? I'm not following.

    vcmj @programming.dev
    Posts 0
    Comments 44