The science of human consciousness offers new ways of gauging machine minds – and suggests there’s no obvious reason computers can’t develop awareness.
We were able to engineer artificial flight without a precise understanding of natural flight.
I don't think we need to understand how consciousness develops (unless you want to recreate exactly that developmental process). But we do need to be able to define what it is, so that we know when to tick the "done" box. Wait, no. This, too, can be an iterative process.
So we need some idea of what it is and what it isn't. We tinker around. We check whether the result resembles what we intended. We refine our goals and processes, and try again. This will probably lead to a co-evolution of understanding and results. But a profound understanding isn't necessary (albeit very helpful) to get good results.
Also, maybe there can be different kinds of consciousness. Maybe ours is just one of many possible kinds. So clinging to our version might not be very helpful in the long run, just as we don't try to recreate the human eye when building cameras.
"Conscious". What even is it? Look around the animal kingdom and pick your own definition. Reacting to stimulus? Pattern recognition? Pattern reaction? A bunch of vectors between words?
I think of consciousness in functional terms. Once models are advanced enough, will the question even matter? Once a model feels like it can empathize, be curious, and respect and react to another's feedback, the label hardly matters. Those are qualities I value above most others, and I don't always see them even from other humans.
Consciousness can't be measured anyway. I know I'm conscious, and that is all I can know. From the outside, there is no distinction between being conscious and simulating it. I cannot prove the consciousness of the people around me any more than I could prove it for people in my dreams, animals, or any given AI.