It just sounds like the creator made a thing that wasn't what people wanted.
It just feels like the question to ask, then, isn't "how do I get them to choose the thing despite it not being what they want?"
"Hard work goes to waste when you make a thing that people don't want" is ... true. But I would say it's a stretch to call it a "problem". It's just an inescapable reality. It's almost tautological.
Look at houses. Say you built a village with a diverse bunch of houses, but nobody wants to live in more than half of them. Then: "How do I get people to live in my houses?" "Build houses that people actually want to live in." Sure, you can pay people money to live in your weird houses; I just feel like you've missed the point of being an architect somewhat.
Can you tell if an AI is lying to you? A new paper claims that we essentially can do exactly that, at least under the right conditions. Another paper claims we can inject various sentiments into re…
Not many new fundamentals this time. Lots of product news, however. It's always good to see interpretability making progress.
It slices. It dices. Or, at least, it sees, hears, talks, creates stunningly good images and browses the web. Welcome to the newly updated GPT-4. That’s all in two weeks. Throw in Microsoft 365 Cop…
We are about to see what looks like a substantial leap in image models. OpenAI will be integrating DALL-E 3 into ChatGPT. The pictures we've seen look gorgeous and richly detailed, with the ability to generate images to much more complex specifications than existing image models. Before, the rule o...
The biggest thing for me is that the new 3.5 apparently plays competent chess - at a high-amateur level - iff you prompt it just right. I would not have expected that, considering how Anarchy Chess-grade ChatGPT's normal play is. Once again this demonstrates that you can never prove the absence of a skill.
It works for the AI. "Take a deep breath and work on this problem step-by-step" was the strongest AI-generated custom instruction. You, a human, even have lungs and the ability to take an actual deep…
Once again not much new. On the regulation as well as capability front, things keep grinding along.
Last week there was a claim that Pi AI cannot be jailbroken. This week, a Twitter user has it giving steps to manufacture heroin and C4. So it goes.
The most interesting progress for me is the paper noting that "grokking" happens because the network picks up two separate circuits: one for memorization and one for generalization. But there's no inherent preference for generalization; it's just a blessing of scale: retrain the grokked network on a too-small dataset and it forgets its generalization.
We are, as Tyler Cowen has noted, in a bit of a lull. Those of us ahead of the curve have gotten used to GPT-4 and Claude-2 and MidJourney. Functionality and integration are expanding, but on a rel…
> We are, as Tyler Cowen has noted, in a bit of a lull.
Hi! As this place seems to be pretty idle, I'm gonna start posting Zvi's weekly roundup posts here. I'm a doomer and y'all are probably accelerationists, so this should hopefully generate juicy discussion. (Not this week though, things are pretty slow.)