Forget the clutch, self-driving cars need ‘adjustable ethics’ set by owners

One of the issues with self-driving vehicles is legal liability for death or injury in the event of an accident. If the car maker programs the car so the driver has no choice, it is likely the company could be sued over the car’s actions.

 

One way around this is to shift liability to the car owner by allowing them to pre-set the values or options that govern how the car behaves in an accident.

 

People are likely to want the option to choose how their vehicle behaves, both in an emergency and in general, so it seems the issue of adjustable ethics will become real as robotically controlled vehicles become more common.
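
To make the idea concrete, here is a minimal sketch of what owner-adjustable ethics settings could look like in software. It is purely illustrative: the setting names (occupant_priority, swerve_allowed, max_assumed_risk) are hypothetical and are not drawn from any real manufacturer’s system.

    from dataclasses import dataclass

    @dataclass
    class EthicsSettings:
        """Hypothetical owner-adjustable values governing emergency behaviour."""
        occupant_priority: float = 0.5  # 0.0 = always favour others, 1.0 = always favour occupants
        swerve_allowed: bool = True     # may the car leave its lane to avoid a collision?
        max_assumed_risk: float = 0.2   # highest probability of occupant injury the owner accepts

        def validate(self) -> None:
            # Keep the owner's choices within sane bounds before they are applied.
            if not 0.0 <= self.occupant_priority <= 1.0:
                raise ValueError("occupant_priority must be between 0 and 1")
            if not 0.0 <= self.max_assumed_risk <= 1.0:
                raise ValueError("max_assumed_risk must be between 0 and 1")

    # An owner might dial these preferences up or down, much as they would adjust a seat.
    settings = EthicsSettings(occupant_priority=0.7, swerve_allowed=True)
    settings.validate()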

 

Self-drive is already here

 

With self-driving vehicles already legal to drive on public roads in a growing number of US states, the trend is spreading around the world. The United Kingdom will allow these vehicles from January 2015.

 

Before there is widespread adoption, though, people will need to be comfortable with the idea of a computer being in full control of their vehicle. Much progress towards this has been made already. A growing number of cars, including mid-priced Fords, have an impressive range of accident-avoidance and driver-assist technologies like adaptive cruise control, automatic braking, lane-keeping and parking assist.

 

People who like driving for its own sake will probably not embrace the technology. But there are plenty of people who already love the convenience, just as they might also opt for automatic transmission over manual.

 

Are they safe?

 

After almost 500,000km of on-road trials in the US, Google’s test cars have not been in a single accident while under computer control.

Computers have faster reaction times and do not get tired, drunk or impatient. Nor are they given to road rage. But as accident-avoidance and driver-assist technologies become more sophisticated, some ethical issues are rearing their heads.

 

The question of how a self-driving vehicle should react when faced with an accident in which every available option leads to some number of deaths was raised earlier this month.

 

This is an adaptation of the “trolley problem” that ethicists use to explore the dilemma of sacrificing an innocent person to save multiple innocent people: pragmatically choosing the lesser of two evils.
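
In software terms, “pragmatically choosing the lesser of two evils” amounts to picking the action with the lowest expected harm. The toy sketch below shows only that idea; the casualty estimates and the way an owner-set occupant_priority weighting enters the score are assumptions made for illustration, not a real control algorithm.

    def expected_harm(option, occupant_priority=0.5):
        """Toy harm score: a weighted sum of predicted occupant and bystander casualties."""
        # occupant_priority near 1.0 makes harm to occupants count for more; near 0.0, less.
        return (occupant_priority * option["occupant_casualties"]
                + (1.0 - occupant_priority) * option["bystander_casualties"])

    def choose_lesser_evil(options, occupant_priority=0.5):
        """Return the option whose expected harm is lowest (the 'lesser of two evils')."""
        return min(options, key=lambda o: expected_harm(o, occupant_priority))

    # A trolley-style dilemma: every available manoeuvre harms someone.
    options = [
        {"name": "brake in lane", "occupant_casualties": 1, "bystander_casualties": 0},
        {"name": "swerve left",   "occupant_casualties": 0, "bystander_casualties": 2},
    ]
    print(choose_lesser_evil(options, occupant_priority=0.3)["name"])  # -> brake in lane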

 

An astute reader will point out that, under normal conditions, the car’s collision-avoidance system should have applied the brakes before it became a life-and-death situation. That is true most of the time, but with cars controlled by artificial intelligence (AI), we are dealing with unforeseen events for which no design currently exists.
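
For context on why braking usually settles matters first: a common, simplified basis for automatic emergency braking is to watch the predicted time-to-collision and brake once it drops below a threshold. The sketch below assumes a constant closing speed and an arbitrary 1.5-second threshold; it is not any particular manufacturer’s logic.

    def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
        """Seconds until impact, assuming the closing speed stays constant."""
        if closing_speed_mps <= 0:
            return float("inf")  # not closing on the obstacle, so no predicted collision
        return gap_m / closing_speed_mps

    def should_emergency_brake(gap_m: float, closing_speed_mps: float,
                               threshold_s: float = 1.5) -> bool:
        """Brake automatically once predicted time-to-collision falls below the threshold."""
        return time_to_collision(gap_m, closing_speed_mps) < threshold_s

    # A 20 m gap closing at 15 m/s gives roughly 1.3 s to impact, so the system would brake.
    print(should_emergency_brake(gap_m=20.0, closing_speed_mps=15.0))  # -> True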

 
