If you program an AI drone to recognize ambulances and medics and forbid it from blowing them up, then you can be sure it will never intentionally blow them up. That alone makes it superior to having a Mk. I Human holding the trigger, IMO.
And then when they go looking for that bug and find the logs showing that the operator overrode the safeties instead, they know exactly who is responsible for blowing up those ambulances.
Each additional safeguard makes it harder to blow up an ambulance, and adds another name to the eventual war crimes trial. Don't let the perfect be the enemy of the good, especially when it comes to reducing the number of ambulances that get blown up in war zones. A rough sketch of what I mean is below.
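A minimal sketch of that idea, with hypothetical names throughout and no relation to any real targeting system: a hard-coded protected-class check the machine can't skip on its own, plus an audit record whenever a human operator overrides it.

```python
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("engagement_audit")

# Classes the machine is never allowed to engage on its own initiative.
PROTECTED_CLASSES = {"ambulance", "medic", "hospital"}

def authorize_engagement(target_class: str, operator_id: str | None = None,
                         override: bool = False) -> bool:
    """Return True only if engagement is permitted under the hard rules."""
    if target_class in PROTECTED_CLASSES:
        if not override:
            # Left to itself, the machine refuses.
            return False
        # A human override is possible, but it leaves a permanent,
        # attributable record -- the "name for the trial" part.
        audit_log.warning("OVERRIDE operator=%s class=%s time=%s",
                          operator_id, target_class,
                          datetime.now(timezone.utc).isoformat())
    return True
```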
Those weapons come out of developments in medicine. Technology itself is not good or evil; it can be used for good or for evil. If you decide not to develop a technology, you're depriving good actors of it as well. My point earlier was to show that there are good uses for these things.
I disagree with your premise here. Taking a life is a serious step. A machine that unilaterally decides to kill some people with no recourse to human input has no good application.
It's like inventing a new biological weapon.
By not creating it, you are not depriving any decent person of anything that is actually good.
It's more like we're giving the machine more opportunities to go off accidentally, or potentially encouraging more use of civilian camouflage to evade our hunter-killer drones.
Right, because self-driving cars have been great at correctly identifying things.
And those LLMs have been following their rules to the letter.
We really need to let go of our projected concepts of AI in the face of what's actually been arriving. And one of the things we need to let go of is the assumption of immutable rule-following and accuracy.
In any real-world deployment of killer drones, there's going to be an acceptable false positive rate that somebody has signed off on.
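To put a number on what a "signed off" false positive rate means in practice, here's a back-of-the-envelope calculation with entirely made-up figures:

```python
# Every number here is hypothetical, for illustration only.
false_positive_rate = 0.001       # 0.1% of engagements hit the wrong target
engagements_per_month = 10_000    # assumed operational tempo

expected_wrong_hits = false_positive_rate * engagements_per_month
print(expected_wrong_hits)  # 10.0 -- "acceptable" on paper, ten wrong targets in practice
```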