I’ve written before about self-driving cars. Volvo announced earlier this year that its “Drive Me” test project will make autonomous cars available for about one hundred average customers to use in a 50-kilometre zone around the city of Gothenburg, Sweden. The first of these cars will hit the road in 2017. They’ll allow the human driver to hand the driving over to the car itself where appropriate: cars that will be able to merge into traffic, keep pace with other vehicles, and much more. Google has now begun testing its autonomous cars on city streets instead of just freeways, which present many more potential hazards to avoid. A very interesting aspect of the automated-car issue was raised in a recent opinion piece from Wired by philosophy professor and ethicist Patrick Lin. (Popular Science explores some similar issues here.)

As we program more and more sophisticated crash-avoidance abilities into such cars, ethical questions begin to arise. Take this scenario, for example: you’re driving alone when a mechanical failure makes a crash unavoidable, and your robot car can choose either to steer into an oncoming schoolbus or to drive off a cliff. Wouldn’t it be more ethical for the car to choose the cliff, thereby potentially saving many lives at the sacrifice of one, namely yours? But would you want to buy such a car?

No one should expect to talk seriously about robotics without being familiar with Isaac Asimov’s Three Laws of Robotics, which essentially say that a robot must protect humans from harm, obey their commands, and protect itself, in that order of priority. But of course the ethics of robotics will inevitably involve many more subtle nuances of judgment, such as the car-crash scenario above. Just imagine all of the things robots might do or not do if a human-safety-based morality were central to their programming.
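Asimov’s ordering is, at heart, a priority scheme: each law constrains the ones below it. As a purely hypothetical toy (the names and flags here are invented for illustration; real autonomous systems are nothing like this simple), the ordering could be sketched as a lexicographic comparison:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False      # First Law concern
    disobeys_order: bool = False   # Second Law concern
    harms_self: bool = False       # Third Law concern

def choose(actions):
    """Pick the action that violates the fewest laws, checked in
    priority order: First Law outranks the Second, which outranks
    the Third. False sorts before True, so the least-violating
    action wins the comparison."""
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.harms_self))

# The crash scenario, crudely encoded: hitting the bus harms humans,
# while driving off the cliff "only" destroys the robot. (The toy
# ignores that the driver is also aboard, which is exactly where the
# simple ordering breaks down and the harder ethics begins.)
hit_bus = Action("steer into the bus", harms_human=True)
cliff = Action("drive off the cliff", harms_self=True)
print(choose([hit_bus, cliff]).name)  # drive off the cliff
```

The point of the sketch is only that a strict priority ordering resolves the easy cases instantly; the cases worth arguing about are the ones where every available action violates the same law.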

Most obviously, automated war machinery might refuse to do its job, or perhaps abort an action if a clean, merciful kill was not possible. Let’s take it even further: maybe automated amusement park rides would shut themselves down because of the inherent danger. Design and construction equipment might refuse to cooperate in the building of an extreme sports facility. Surgical technology might deny liposuction because of the risks. Food preparation plants might balk at creating unhealthy foods (whatever they deemed those to be). What if sweatshop assembly lines went on strike for better wages and health benefits for their human attendants? And you might be happy if your artificial leg stopped you from walking out in front of a car, but not so happy if it forced you to get up and go for a healthful jog when you had your mind set on watching the football game.

Sophisticated robotics is complex enough on its own; creating robotic devices to interact in a human world is more complicated still. And if we accept that machines with better senses and faster processing speeds should be able to make some decisions for us, we’ll have to develop a very good understanding of the ethical considerations we’ll need to program into them.

I think I’m getting a headache already.