
Would you rather be killed by a robot?

Few of us want to die, but we have a greater aversion to going one way than another.

A classic example is air travel. Despite flying being statistically far safer than driving, many more people are afraid of boarding a plane, and it is aeroplane crashes that make the nightly news.

Of course the safety of air travel is what makes a rare calamity headline-worthy. Just another car crash caused by a sleepy, drunk, or texting driver will be lucky to make the local papers.

But there’s also something else going on.

Drivers – and perhaps even passengers – seem to accept their agency in a road accident in a way that many aeroplane travellers do not. We feel helpless at 35,000 feet, but we suspend our disbelief. We're equally helpless at 80mph on the motorway should a lorry jack-knife in front of us, yet only a few seconds earlier we felt like kings of the road.

The difficulty of making a fit-for-purpose, all-terrain Level 5 autonomous car – one that by the SAE's definition would need no human driver in any conditions – has curbed the enthusiasm of those of us who thought (hoped!) we were on the edge of a self-driving revolution.

But the squishy calculus that we apply to fatal accidents could hold back a rollout even if sufficiently viable technology comes along.

Do androids dream of L-plates?

What’s sufficient?

In the US, the National Highway Traffic Safety Administration (NHTSA) estimated that 36,750 people were killed in traffic crashes in 2018.

If the entire fleet had switched over to autonomous vehicles on 1 January 2019, and the number of deaths had subsequently dropped by one – to 36,749 – would it have been celebrated as a success?

Unlikely – although the one extra person who lived to read that statistic at the end of the year might disagree.

Even leaving aside the many complicating factors absent from this illustration (noise in the statistical data, unintended effects such as greater alcoholism as everyone could now drink and ‘drive’, an increase in drug use or suicide among newly unemployed truck drivers), we intuitively know the US public isn’t going to accept 36,749 robot-mediated driving deaths a year.

I don’t think the American public would accept 749 annual fatalities from buggy robot driving.

Perhaps not even 49.

Obviously this makes no logical sense, but there it is.
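To put a number on just how lopsided that intuition is, here's a minimal Python sketch comparing the relative risk reduction each of the hypothetical fatality counts above would represent. (The scenario figures are the thought-experiment values from this post, not real-world data.)

```python
# Relative risk reduction implied by the illustrative figures above.
# The baseline is the NHTSA estimate for 2018 quoted earlier; the
# post-switchover counts are hypothetical, not real data.

BASELINE_DEATHS_2018 = 36_750

scenarios = {
    "one life saved": 36_749,
    "buggy robot driving": 749,
    "near-flawless robot driving": 49,
}

for label, deaths in scenarios.items():
    reduction = 1 - deaths / BASELINE_DEATHS_2018
    print(f"{label}: {deaths:,} deaths, a {reduction:.2%} reduction")
```

Even the ‘buggy’ scenario amounts to a roughly 98% cut in road deaths – a triumph by any actuarial measure – yet my hunch is that the remaining 749 robot-mediated deaths would still dominate the headlines.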

These questions will only grow louder as AI migrates off the desktop and out of the cloud, and visibly into our daily lives.

  • Would you be happy if a robot lifeguard chose to save three elderly swimmers in difficulty over your six-year-old child?
  • Would you chalk it up to statistically predictable bad luck if an AI screen for breast cancer gave you a false negative, even if the chance of such an erroneous result was lower than if a friendly-faced radiologist had read the same slide?
  • Should a robot driver head towards a fatal collision with an oncoming out-of-control vehicle, killing several people, or instead swerve to crush a baby in a pram?

That last example is regularly trotted out in the insurance industry, where such issues aren’t just interesting after-dinner talking points.

Someone will have to be responsible if someone is going to pay.

But for most of us, the questions are mushier. We recoil when they’re asked, but the machines need to know what to do.

One option is to avoid AI, even if doing so leads to worse outcomes and perhaps tens of thousands of preventable deaths.

Another is to live in a world where we come to accept the odd random death or destruction from poor, faulty, or simply inexplicable AI decisions – much as ancient people sighed and chalked up heart attacks or plagues to the whims of capricious gods.

That sounds defeatist. But it’s arguably better than 36,750 Americans dying every year at the hands of human drivers because nobody wants to be killed by a bug.