I saw an interesting article in Bloomberg about self-driving cars: how should they be programmed to respond when a choice must be made between the occupants’ safety and that of other vehicles or pedestrians? Do you crash into the tree or veer onto a sidewalk full of people?
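
To make the dilemma concrete, here is a purely hypothetical sketch of how such a trade-off could end up as literal parameters in a vehicle’s code. The maneuvers, harm estimates, and the occupant_weight “selfishness” knob are all invented for illustration; the point is only that someone, somewhere, has to pick the number.

```python
# Hypothetical illustration only: the maneuvers, harm estimates, and the
# occupant_weight "selfishness" parameter are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_harm: float    # expected harm to the car's occupants (0..1)
    bystander_harm: float   # expected harm to pedestrians / other vehicles (0..1)

def choose_maneuver(options, occupant_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm.

    occupant_weight > 1 makes the car 'selfish'; < 1 makes it 'altruistic'.
    A buyer, an insurer, or a regulator would have to choose that weight.
    """
    def cost(m):
        return occupant_weight * m.occupant_harm + m.bystander_harm
    return min(options, key=cost)

options = [
    Maneuver("brake and hit the tree", occupant_harm=0.7, bystander_harm=0.0),
    Maneuver("veer onto the sidewalk", occupant_harm=0.1, bystander_harm=0.9),
]

print(choose_maneuver(options, occupant_weight=1.0).name)   # hits the tree
print(choose_maneuver(options, occupant_weight=10.0).name)  # takes the sidewalk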

Will future buyers demand a choice between selfish and altruistic options? Will insurers shun vehicles that choose the most costly disaster? Will governments demand approval authority over the code? Will Asimov’s laws be written into the Uniform Commercial Code?

When the military begins deploying autonomous drones, smart munitions, or a cleverer class of mine, will those systems need to be programmed for some acceptable level of collateral damage or force proportionality as they go about their lethal business? Will machines need to decide when troops in the field must bear more risk to preserve the lives of non-combatants? Will computer intelligence someday be called upon to make some of the terrible decisions military officers have always had to make?

Will machine ethics become an important new branch of philosophy?