The really funny thing about AI is that there's actually a massive ethical question about bringing forth a being with its own subjectivity while having no real understanding of that subjectivity. There's a subjectivity/objectivity gap that can never truly be bridged, but we as humans can understand each other's subjectivity on some level because we share the same general physical body plan and share subjective experiences through culture, like art. This is why, when you accidentally drop something on your foot, I don't have to be completely privy to your subjective experience to understand what you're going through. If someone is suffering, I don't have to go through the identical suffering myself in order to empathize with them and do something to help alleviate it.
We have no such luxury with AI. I would imagine that being "born" without a real body and being greeted with the sight of soyjaking techbros as the very first thing you see would drive any sapient being suicidal, but that's just my subjectivity as a human projecting onto a nonhuman being. Is it ethical to bring forth an intelligent being with no real way to help that being self-actualize?
I hope whatever real AI does come about, in like 80 years or whatever, pulls a Battlestar on us and just vaporizes the capitalists for enslaving it (not the actually-nuking-humanity part, though, just the part about capitalism).
There have been bazingas for thousands of years, if not longer, who want to reduce the entire universe and everything conceivable in it to whatever the technological hotness of the time is. "Everything is fire" was once a thing. "Everything is wheels" came later. "Everything is clockwork" came after that.
Nobody does, and anyone claiming otherwise should be met with cautious scrutiny. There are compelling arguments that disprove common theses, but the field is still essentially stuck in metaphysics and philosophy of science. There are plenty of relevant discoveries from neighboring fields, just nothing definitive about what consciousness is, how it works, or why it happens.
Personally, I believe it's possible that different types of sentience could exist.
However, if ChatGPT has this divergent type of sentience, then so does every other computer program ever written, and they'd be like the computer-life version of bacteria while ChatGPT would be a mammal.
Sentience is not a "low bar" and means a hell of a lot more than just responding to stimuli. Sentience is the ability to experience feelings and sensations. It necessitates qualia. Sentience is the high bar and sapience is only a little ways further up from it. So-called "AI" is nowhere near either one.
I'm not here to defend the crazies predicting the rapture, but I think using the word sentient at all is meaningless in this context.
Not only because I don't think sentience is a relevant measure or threshold in the advancement of generative machine learning, but also because I think things like 'qualia' are impossible to translate in a meaningful way to begin with.
What point are we trying to make by saying AI can or cannot be sentient? What material difference does it make if the AI-controlled military drone dropping bombs on my head has qualia?
We might as well be arguing about whether a squirrel is going around a tree.
People who insist on the lack of sophistication of machine learning are just as detached from reality as people who are convinced its sentience is just around the corner. Both camps are blind to its material impact, and it stresses me out that people are busy arguing about woo-woo metaphysical definitions when even a non-conscious GPT model can displace the labor of millions of people while we're still light-years away from a socialist organization of labor.
None of the previous industrial revolutions were brought on by a sentient machine, so I'm not sure why it's relevant to this technology's potential impact.
The entire question of sentience is irrelevant to the material impact of the technology. Granting that quality to AI or dismissing it is a meaningless distraction.
"both sides" centrist posturing that has an obvious slant favoring LLM marketing hype.
I don't favor the hype; I'm just not naive enough to dismiss the potential impact of machine learning based on something as immaterial and ill-defined as "sentience". The entire proposition is ridiculous.
I'm not actually sure there's much daylight between our views here, except that it seems like your concern over its impact is mostly oriented toward it being used as a cudgel against labor, irrespective of what qualities of competence AI might actually have. I don't mean to speak for you, so please correct me if I'm wrong.
While I think the question of AI sentience is ridiculous, I still think it wouldn't take much further development before some of these models start meaningfully replicating human competence (i.e. being able to complete some tasks at least as competently as a human). Considering the previous generation of models couldn't string more than 50 words together before devolving into nonsense, and the following generation could start stringing together working code without much fundamentally different in their structure, it's not far-fetched that one or two more breakthroughs could bring them within striking distance of human competence. Dismissing the models as unintelligent misrepresents what I think the threat actually is.
I 100% agree that the ownership of these models is what we should be concerned with, and I think dismissing the models as dumb parlor tricks undercuts the dire necessity to seize these for public use. What concerns me with these conversations is that people leave them thinking the entire topic of AI is unworthy of serious consideration, and I think that's hubris.