A robot-delivered bomb was used to kill a suspected sniper in the recent shootout in Dallas, in which 5 police officers were killed and some 20 people were injured. This was the first time a law enforcement agency in the US used a robot to kill a human being.
Pertinent questions are being raised in this USA Today article. How far will we go in using robots in such situations? Should the decision to kill a human being be left to artificial intelligence software embedded in a robot's chip? The robot used in Dallas was entirely under manual control, and if it hadn't been used, more lives would have been put at risk.
I think this is not simply a question of whether robots should kill people. More than the ethics, what needs to be pondered is how well these robots can be controlled. Machines and intelligent software are going to be an integral part of warfare and law enforcement; that state of affairs is inevitable. The more important question is how well these machines and software applications will be used.
There will always be technological gaps between people and between countries. When a drone is sent into Afghan territory to drop bombs, there is already a technological gap. When bombs were dropped on Hiroshima and Nagasaki, there was already a technological gap, and ethical questions still manifested.
The deeper question of ethics will arise only when robots and machines become intelligent enough to develop feelings of their own.