Killer Cars

Self-driving cars are coming for us!

<dramatic pause>

As an engineer, one of the things that irritates me most is when some public figure uses his or her pulpit to pronounce that humanity is approaching a dark hour in which the machines we’ve created will soon be choosing, effectively, who lives and who dies. The implication is always that we need strong (read ‘suffocating’) regulation now, to ensure that the public is “protected”. And by “protected” I mean pandering to, and allaying, the irrational fears of a public that is broadly distrustful of what it doesn’t understand. It is textbook fear-mongering.

But setting aside for a moment the absurdity of people whose knowledge of the underlying technology (if it can be called knowledge) is limited to Hollywood’s latest thriller, yet who pretend to offer more than an uneducated opinion on the matter, let’s examine one of the most often cited examples.

Usually it is framed like this: suppose a self-driving car is transporting its passengers along a city street. On one side, a group of pedestrians is waiting patiently at an intersection for the light to turn. On the other, a tractor trailer is approaching in the oncoming lane. Suddenly, a small child runs into the street in front of the autonomous vehicle. At this point the car has one of three options: swerve into the group of pedestrians (possibly killing some of them), swerve into oncoming traffic (possibly killing its passengers), or maintain its heading and hit the child. This is essentially a modern adaptation of the trolley problem from introductory philosophy.

Some time ago, the CBC’s Jesse Hirsh used this exact scenario to argue, essentially, that broad public consultation is required before companies developing this technology should be allowed to commercialize it… in the name of “transparency”, of course.

The only problem is that this is a terrible example of the implications of increasingly autonomous systems. Why? Because the solution to this ‘problem’ should be obvious. No machine will ever be developed to react to an external scenario by increasing the likelihood of harm to its own operator or occupants. That would be completely self-defeating; no one would buy it. This immediately rules out swerving into oncoming traffic, if for no other reason than the economic self-interest of the maker. But beyond the obvious, the presenters of this scenario commit a blatant fallacy by assuming that the autonomous vehicle is the only agent making a decision in the situation. Assuming the vehicle has been programmed to obey the rules of the road, it has the right-of-way. It cannot and should not be expected to anticipate an irrational or careless human putting themselves in danger, and then intervene in a way that increases the danger to other humans who had no say in the matter. If you were wondering what any of this had to do with centrism until now, this is your answer: individual responsibility.
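To make the argument concrete, here is a toy sketch (in Python) of the priority ordering described above. It is purely illustrative: the action names, the Outcome fields, and the idea that outcomes sort cleanly into categories are all inventions for this example, not anything resembling a real vehicle’s control code.

```python
# A toy sketch of the decision policy argued for above. Purely
# illustrative: the action names, the Outcome fields, and the fixed
# priority ordering are assumptions invented for this example, not
# anything resembling real autonomous-vehicle control code.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    BRAKE_HARD = auto()             # maintain heading, maximum braking
    SWERVE_TO_PEDESTRIANS = auto()  # redirect harm onto right-of-way bystanders
    SWERVE_TO_ONCOMING = auto()     # endanger the vehicle's own occupants


@dataclass
class Outcome:
    endangers_occupants: bool
    endangers_bystanders: bool  # people obeying the rules of the road


def choose_action(outcomes: dict[Action, Outcome]) -> Action:
    """Pick the first action, in fixed priority order, that harms no one.

    The ordering encodes the essay's argument: never trade away the
    occupants' safety, and never redirect harm onto bystanders who were
    following the rules. If every option carries risk, fall back to
    braking hard on the current heading, which is exactly what a
    startled human driver with time to react would do.
    """
    for action in (Action.BRAKE_HARD,
                   Action.SWERVE_TO_PEDESTRIANS,
                   Action.SWERVE_TO_ONCOMING):
        outcome = outcomes[action]
        if not (outcome.endangers_occupants or outcome.endangers_bystanders):
            return action
    return Action.BRAKE_HARD


# The scenario above: braking may still hit the child (who stepped into
# the vehicle's right-of-way), but it endangers neither the occupants
# nor the rule-following bystanders, so the policy selects it.
scenario = {
    Action.BRAKE_HARD: Outcome(endangers_occupants=False, endangers_bystanders=False),
    Action.SWERVE_TO_PEDESTRIANS: Outcome(endangers_occupants=False, endangers_bystanders=True),
    Action.SWERVE_TO_ONCOMING: Outcome(endangers_occupants=True, endangers_bystanders=False),
}
print(choose_action(scenario))  # Action.BRAKE_HARD
```

The point is not that a real system would look like this; it is that the priority ordering is obvious and boring, not a dilemma.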

This is not to say that there aren’t legitimate discussions to be had about who ought to have access to, or dictate the use of, certain technologies. But a common theme here is going to be this: I categorically reject the notion that all regulation is good regulation, or that bad regulation is better than no regulation, both of which seem to be positions commonly held under the guise of concern for social impact. And this gets back to the issue I have with people making unreasonable assertions based on a flawed, or completely non-existent, understanding of how the thing they’re criticizing works. Computers may be orders of magnitude faster than us lowly humans, but that does not make it inherently reasonable to hold them to a higher standard of moral judgement; their capacity is not infinite. The only thing the machine should be expected to do in this case is what any startled human driver with enough time to react would do: slam on the brakes and hope for the best. Hit the child.