lately it's been feeling like that
  • Wait until it starts feeling like revelation déjà vu.

    Among them are Hymenaeus and Philetus, who have swerved from the truth, saying that the resurrection has already occurred. They are upsetting the faith of some.

    • 2 Tim 2:17-18
  • Why are people seemingly against AI chatbots aiding in writing code?
  • I'm a seasoned dev and I was at a launch event when an edge case failure reared its head.

    In less than half an hour after pulling out my laptop to fix it myself, I'd used Cursor + Claude 3.5 Sonnet to:

    1. Automatically add logging statements to help identify where the issue was occurring
    2. Tell it the issue once identified and have it apply a fix
    3. Have it remove the logging statements, then push the update

    I never typed a single line of code and never left the chat box.

    My job is increasingly becoming Henry Ford drawing the 'X' and not sitting on the assembly line, and I'm all for it.

    And this would only have been possible in just the last few months.

    We're already well past the scaffolding stage. That's old news.

    Developing has never been easier or more plain old fun, and it's getting better literally by the week.

    Edit: I agree about junior devs not blindly trusting them though. They don't yet know where to draw the X.

  • OpenAI releases o1, its first model with ‘reasoning’ abilities
  • Actually, they are hiding the full CoT sequence outside of the demos.

    What you are seeing there is a summary, but because the actual process is hidden it's not possible to see what actually transpired.

    People are not at all happy about this aspect of the situation.

    It also means that model context (which research has shown to be much more influential than previously thought) is now partly hidden, with exclusive access and control by OAI.

    There are a lot of things to focus on in that image, and "hur dur, the stochastic model can't count letters in this cherry-picked example" is the least among them.

  • OpenAI releases o1, its first model with ‘reasoning’ abilities
  • Yep:

    https://openai.com/index/learning-to-reason-with-llms/

    First interactive section. Make sure to click "show chain of thought."

    The cipher one is particularly interesting, as it's intentionally difficult for the model.

    The tokenizer is famously bad at letter-level counting, which is why previous models can't count the number of r's in "strawberry".

    So the cipher depends on two-letter pairs, and you can see how it screws up the tokenization around the "xx" at the end of the last word, then gradually corrects course.
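
    A rough sketch of why that happens (the merge table below is hypothetical, not the actual GPT tokenizer, but the mechanism is the same):

    ```python
    # A BPE-style tokenizer merges frequent character sequences into single
    # tokens, so the model receives token IDs, not individual letters.
    def toy_tokenize(word: str) -> list[str]:
        # Hypothetical merge table; real BPE merges are learned from data.
        merges = ["straw", "berry", "str", "aw"]
        tokens = []
        i = 0
        while i < len(word):
            for m in merges:
                if word.startswith(m, i):
                    tokens.append(m)
                    i += len(m)
                    break
            else:
                # No merge applies; fall back to a single character.
                tokens.append(word[i])
                i += 1
        return tokens

    word = "strawberry"
    print(word.count("r"))     # character view: 3
    print(toy_tokenize(word))  # token view: ['straw', 'berry']
    ```

    The model only ever operates on the token view, so answering "how many r's?" means recalling the spelling inside each token rather than reading it off the input.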

    It will help clarify how, behind the scenes, it goes about solving something like the example I posted earlier.

  • OpenAI releases o1, its first model with ‘reasoning’ abilities
  • I'd recommend everyone saying "it can't understand anything and can't think" to look at this example:

    https://x.com/flowersslop/status/1834349905692824017

    Try to solve it after seeing only the first image before you open the second and see o1's response.

    Let me know if you got it before seeing the actual answer.

  • Jet Fuel
  • I fondly remember reading a comment in /r/conspiracy on a post claiming a geologic seismic weapon brought down the towers.

    It just tore into the claims, citing all the reasons this was preposterous bordering on batshit crazy.

    And then it said "and your theory doesn't address the thermite residue" going on to reiterate their wild theory.

    Was very much a "don't name your gods" moment that summed up the sub - a lot of people in agreement that the truth was out there, but bitterly divided as to what it might actually be.

    As long as they only focused on generic memes of "do your own research" and "you aren't being told the truth" they were all on the same page. But as soon as they started naming their own truths, it was every theorist for themselves.

  • The $700 PS5 Pro doesn’t come with a disc drive
  • They got off to a great start with the PS5, but as their lead grew over their only real direct competitor, they became a good example of the problems with monopolies all over again.

    This is straight up the PS3 launch all over again, as if they learned nothing.

    Right on the tail end of a horribly mismanaged PSVR 2 launch.

    We still barely have any current-gen-only games, and a $700 price point is insane for such a small library that actually makes use of the hardware.

  • AI worse than humans in every way at summarising information, government trial finds
  • Meanwhile, here's an excerpt of a response from Claude Opus when I tasked it with evaluating intertextuality between the Gospels of Matthew and Thomas from the perspective of entropy reduction in redactional efforts, owing to human difficulty with randomness. That framing doesn't exist in scholarship outside of a single Reddit comment I made years ago in /r/AcademicBiblical that lacked these specific details, and it came up on page 300 of a chat about completely different topics:

    Yeah, sure, humans would be so much better at this level of analysis within around 30 seconds. (It's also worth noting that Claude 3 Opus doesn't have the full text of the Gospel of Thomas accessible to it, so it needs to reason through entropic differences primarily from the intertextual overlaps that have been widely discussed in consensus literature and are thus accessible.)

  • AI worse than humans in every way at summarising information, government trial finds
  • This is pretty much every study right now as things accelerate. Even just six months can be a dramatic difference in capabilities.

    For example, Meta's 3-405B shows among the strongest situational awareness of current models, a capability that isn't present to anywhere near the same degree in 2-70B or even 3-70B.

  • Deep thoughts.
  • Lucretius, in De Rerum Natura around 50 BCE, seemed to have a few that were just a bit ahead of everyone else's, owed to the Greek philosopher Epicurus.

    Survival of the fittest (book 5):

    "In the beginning, there were many freaks. Earth undertook Experiments - bizarrely put together, weird of look Hermaphrodites, partaking of both sexes, but neither; some Bereft of feet, or orphaned of their hands, and others dumb, Being devoid of mouth; and others yet, with no eyes, blind. Some had their limbs stuck to the body, tightly in a bind, And couldn't do anything, or move, and so could not evade Harm, or forage for bare necessities. And the Earth made Other kinds of monsters too, but in vain, since with each, Nature frowned upon their growth; they were not able to reach The flowering of adulthood, nor find food on which to feed, Nor be joined in the act of Venus.

    For all creatures need Many different things, we realize, to multiply And to forge out the links of generations: a supply Of food, first, and a means for the engendering seed to flow Throughout the body and out of the lax limbs; and also so The female and the male can mate, a means they can employ In order to impart and to receive their mutual joy.

    Then, many kinds of creatures must have vanished with no trace Because they could not reproduce or hammer out their race. For any beast you look upon that drinks life-giving air, Has either wits, or bravery, or fleetness of foot to spare, Ensuring its survival from its genesis to now."

    Trait inheritance from both parents that could skip generations (book 4):

    "Sometimes children take after their grandparents instead, Or great-grandparents, bringing back the features of the dead. This is since parents carry elemental seeds inside – Many and various, mingled many ways – their bodies hide Seeds that are handed, parent to child, all down the family tree. Venus draws features from these out of her shifting lottery – Bringing back an ancestor’s look or voice or hair. Indeed These characteristics are just as much the result of certain seed As are our faces, limbs and bodies. Females can arise From the paternal seed, just as the male offspring, likewise, Can be created from the mother’s flesh. For to comprise A child requires a doubled seed – from father and from mother. And if the child resembles one more closely than the other, That parent gave the greater share – which you can plainly see Whichever gender – male or female – that the child may be."

    Objects of different weights will fall at the same rate in a vacuum (book 2):

    “Whatever falls through water or thin air, the rate Of speed at which it falls must be related to its weight, Because the substance of water and the nature of thin air Do not resist all objects equally, but give way faster To heavier objects, overcome, while on the other hand Empty void cannot at any part or time withstand Any object, but it must continually heed Its nature and give way, so all things fall at equal speed, Even though of differing weights, through the still void.”

    Often I see people dismiss the things the Epicureans got right with an appeal to their lack of the scientific method, which has always seemed a bit backwards to me. In hindsight, they nailed so many huge topics that didn't emerge again for millennia that it was surely not mere chance. The fact that they hit so many nails on the head without the hammer we use today indicates (at least to me) that there's value in looking more closely at their methodology.

  • Man who said he'd 'pluck out' VP Harris' eyes was baffled when feds appeared at door: Complaint
  • It was good, but it did feel like the narrative around the boss could have been tied to the rest of the world a bit better.

    The zone leading up to it was one of my favorite in the DLC with the ambience build up, but I expected more after the fight.

  • Elden Ring is "the limit" for From Software project scale, says Miyazaki - multiple, "smaller" games may be the "next stage"
  • The DLC is really the right balance for FromSoft.

    The zones in the base game are slightly too big.

    In the DLC, it's still open world and extremely flexible in how you explore it, but there's less wasted space.

    It's very tightly knit and the pacing is better as a result.

    It's like Elden Ring was watching masters of their craft cut their teeth on something new, and then the DLC was them applying everything they learned in that process.

    Can't wait for their next game in that same vein (especially not held back by last gen consoles).

  • Mapping the Mind of a Large Language Model
    www.anthropic.com Mapping the Mind of a Large Language Model

    We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside a modern, production-grade large language model.


    I often see a lot of people with outdated understanding of modern LLMs.

    This is probably the best interpretability research to date, by the leading interpretability research team.

    It's worth a read if you want a peek behind the curtain on modern models.

    Examples of artists using OpenAI's Sora (generative video) to make short content
    openai.com Sora: First Impressions

    We have gained valuable feedback from the creative community, helping us to improve our model.

    New Theory Suggests Chatbots Can Understand Text
    www.quantamagazine.org New Theory Suggests Chatbots Can Understand Text | Quanta Magazine

    Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.


    I've been saying this for about a year since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

    Edit: Because people aren't actually reading and just commenting based on the headline, a relevant part of the article:

    > New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

    > This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

    > “[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”

    New Theory Suggests Chatbots Can Understand Text
    www.quantamagazine.org New Theory Suggests Chatbots Can Understand Text | Quanta Magazine

    Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.


    I've been saying this for about a year, since seeing the Othello GPT research, but it's great to see more minds changing on the subject.

    Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues
    www.forbes.com Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues

    Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium+ subscription tier, where those who are the most devoted to the site, and in turn, usual...


    I'd been predicting this to friends and old colleagues for a few months (you can have a smart AI or a conservative AI, but not both), and it's so much funnier than I thought it would be now that it's finally arrived.

    Israel raids Gaza's Al Shifa Hospital, urges Hamas to surrender
    www.reuters.com Israel raids Gaza's Al Shifa Hospital, urges Hamas to surrender

    The Israeli military said it was carrying out a raid on Wednesday against Palestinian Hamas militants in Al Shifa Hospital, the Gaza Strip's biggest hospital, and urged them all to surrender.

    Machine-learning system based on light could yield more powerful, efficient large language models
    news.mit.edu Machine-learning system based on light could yield more powerful, efficient large language models

    An MIT machine-learning system demonstrates greater than 100-fold improvement in energy efficiency and a 25-fold improvement in compute density compared with current systems.


    I've suspected for a few years now that optoelectronics is where this is all headed. It's exciting to watch as important foundations are set on that path, and this was one of them.

    Elite Bronze Age tombs laden with gold and precious stones are 'among the richest ever found in the Mediterranean'
    www.livescience.com Elite Bronze Age tombs laden with gold and precious stones are 'among the richest ever found in the Mediterranean'

    The obvious wealth of the tombs was based on the local production of copper, which was in great demand at the time to make bronze.


    The Minoan-style headbands from Egypt during the 18th dynasty are particularly interesting.

    Large language models encode clinical knowledge
    www.nature.com Large language models encode clinical knowledge - Nature

    Med-PaLM, a state-of-the-art large language model for medicine, is introduced and evaluated across several medical question answering tasks, demonstrating the promise of these models in this domain.

    An update on Google's efforts at LLMs in the medical field.

    GPT-4 API general availability and deprecation of older models in the Completions API
    openai.com GPT-4 API general availability and deprecation of older models in the Completions API

    GPT-3.5 Turbo, DALL·E and Whisper APIs are also generally available, and we are releasing a deprecation plan for older models of the Completions API, which will retire at the beginning of 2024.

    kromem @lemmy.world
    Posts 12
    Comments 2K