At the same time, I feel like we shouldn't let that happen, because imagine if he actually succeeds? Then we'd just have an immortal crackhead Lex Luthor with a hallucinating ChatGPT whispering further delusions directly into his brain. That can't be good for any of us.
Aspects of tech are often correctly predicted in sci-fi, going all the way back to Lucian writing about a ship of men flying up to the moon in the 2nd century.
But alongside what they get right, the authors always get things wrong too. For example, contrary to Lucian's ideas, the real ship of men that flew to the moon didn't find a race of human-like aliens, all of them men, who could carry children and had a bunch of gay sex with the men of Apollo 11.
TL;DR: Correctly predicting a technology in a story doesn't mean correctly predicting the social impact and context for that technology.
Advertisers would absolutely love to augment your reality with ads or even just the ability to accurately confirm you've actually watched a traditional ad along with how you "felt" about it.
At that point people would absolutely sign up for free implants so they can access ad-supported services that may otherwise become unaffordable in a society further strip-mined of wealth by the then-trillionaire class.
Knowing advertisers, at least here in the US, they would bypass confirming you watched ads by just beaming them straight to your implant and making it impossible to get rid of the ads.
Your reality is already augmented with ads most places you look, and advertisers already have a significant ability to accurately gauge how a sample audience feels about them.
Most don't bother because they don't actually care, and because it's easier and cheaper to just run a self-optimizing ad mix driven by sales results or conversions (a rough sketch of that loop is below) than to try to over-engineer measurement of advertising impact.
Anyone betting on neural implants to make money because of 'advertising' is going to lose a lot of money themselves.
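For what it's worth, that "self-optimizing ad mix" is usually nothing fancier than a bandit-style feedback loop over creatives. Here's a minimal sketch in Python, assuming a hypothetical epsilon-greedy setup; the creative names and conversion rates are made up for illustration, not taken from any real ad platform:

```python
import random

# Minimal sketch of a "self-optimizing ad mix": an epsilon-greedy bandit
# that shifts impressions toward whichever creative converts best.
# All names and rates here are hypothetical, purely for illustration.

ads = {"creative_a": 0.0, "creative_b": 0.0, "creative_c": 0.0}  # observed conversion rates
shown = {name: 0 for name in ads}       # impressions served per creative
converted = {name: 0 for name in ads}   # conversions observed per creative
EPSILON = 0.1                           # fraction of traffic kept exploring

def pick_ad():
    """Mostly serve the best-converting creative; occasionally explore."""
    if random.random() < EPSILON or not any(shown.values()):
        return random.choice(list(ads))
    return max(ads, key=ads.get)

def record_result(name, did_convert):
    """Update the running conversion rate for the creative that was shown."""
    shown[name] += 1
    converted[name] += int(did_convert)
    ads[name] = converted[name] / shown[name]

# Simulated traffic: creative_b secretly converts best (hypothetical rates).
true_rates = {"creative_a": 0.02, "creative_b": 0.05, "creative_c": 0.01}
for _ in range(10_000):
    ad = pick_ad()
    record_result(ad, random.random() < true_rates[ad])

print(shown)  # the mix drifts toward creative_b without measuring anyone's "feelings"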
Of all Elon Musk’s exploits — the Tesla cars, the SpaceX rockets, the Twitter takeover, the plans to colonize Mars — his secretive brain chip company Neuralink may be the most dangerous.
Former Neuralink employees as well as experts in the field have alleged that the company pushed for an unnecessarily invasive, potentially dangerous approach to the implants that can damage the brain (and apparently has done so in animal test subjects) to advance Musk’s goal of merging with AI.
The letter warned that “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and went on to ask: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”
If the intravascular approach can restore key functioning to paralyzed patients, and also avoids some of the safety risks that come with crossing the blood-brain barrier, such as inflammation and scar tissue buildup in the brain, why opt for something more invasive than necessary?
Which perhaps helps make sense of the company’s dual mission: to “create a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.”
Watanabe believes Neuralink prioritized maximizing bandwidth because that serves Musk’s goal of creating a generalized BCI that lets us merge with AI and develop all sorts of new capacities.