
First they came for the big beefy blokes

Around 500 years ago, humans began to be replaced by machines. If it wasn’t happening to you then it was progress beyond imagination.

It gave us cars, The Beatles, the moon landings, the iPhone, and a global population of 7.9 billion.

(And, um, the Facebook newsfeed.)

If it was happening to you then it probably sucked. But soon enough a new generation came along who never knew otherwise and maybe history remembered you as a machine-smashing Luddite just to rub it in.

Strong men were replaced by winches and cherry-pickers.

Guys who could handle themselves with a pike after 20 years of practice were shot dead by a 17-year-old firing a rifle for the first time.

Weavers, natch, got replaced by weaving machines.

My father used to tell, somewhat guiltily, of how back in the punched-card days of computing he’d spent a month learning how the old boys rolled steel – by eye – in a giant factory where caravan-sized vats of the molten stuff sloshed around.

He’d been tasked with replacing them all with a computer program.

Which he then did.

Jobseeker’s allowance

For some reason smart people don’t think this can happen to them.

Are human brains really so special?

And does an Artificial Intelligence (AI) really need to do everything, or does it just need to do enough to do you and me out of a job?

A factory robot can’t learn ballet, but it can do more than enough to replace three or four assembly line workers.

An Amazon store festooned with cameras and sensors can do away with checkout staff, even if it can’t write an opera.

Why shouldn’t specialised AIs brute force their way through law, doctoring, computer programming, architecture…?

Rather chillingly, an AI is now even telling jokes, of a kind.

Those same smart people – one of whom, wearing another hat, I will do my best to pretend to be another day – will tell you that any of these instances is too limited to lead to a sentient AI.

Or that it is just pattern-matching (basically fancy search-and-replace), or that its ability to learn is limited by compute, or that the stuff an AI can do still isn’t very impressive anyway.

Well maybe not, but ‘still’ is having to do a lot of heavy lifting there.

How long have you got for ‘still’ to pass? The universe will wait for processing power to catch up. The machines are infinitely patient.

[Chart. Source: Ark Invest]

Besides, we probably don’t need a super-sentient AI to take over the world for AI to be an existential threat to mankind.

Humdrum AIs that each do just one thing better than any human employee could take 90% of the jobs – and that might be enough to start a revolution.

We can pull the triggers and bash in the heads just fine ourselves. We’re plenty good at that.

Whose line is it anyway?

The current approach to machine learning – the one making headlines for winning at Go and protein folding and all the rest of it – is (generously) only a couple of decades old.

As recently as my undergraduate degree (which sadly isn’t that recent, but I’m not that old) the field of AI hadn’t gone anywhere for 30 years, and we were taught how to program an AI by structuring logic from the ground up.

That approach was about as successful as anyone who has ever tried to reason with a six-month-old baby would expect it to be.

Today, though, you let your machine learning routine of choice run rampant over as much data as you can give it, for as long as you can afford, letting it mutate thousands or millions of times and remorselessly killing off anything that doesn’t come up with the goods.

Eventually you might end up with something that can deliver the output you want. Sort of indistinguishably from magic.

You can then put it to work doing its business forever after.
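Here’s a toy sketch of that mutate-and-cull loop in Python. Everything in it – the target output, the population size, the mutation rate – is invented for illustration, and real systems mostly nudge neural network weights via gradient descent rather than literal evolution. But the remorseless selection pressure is the same idea.

    import random

    TARGET = [0.0, 1.0, 2.0, 3.0]  # the 'goods' we demand a candidate deliver

    def loss(candidate):
        # How far a candidate's output falls short of the goods.
        return sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        # Randomly perturb a candidate - one of those millions of mutations.
        return [c + random.gauss(0, rate) for c in candidate]

    # Start with a population of random guesses.
    population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(50)]

    for generation in range(1000):
        population.sort(key=loss)      # rank by fitness...
        survivors = population[:10]    # ...remorselessly kill the rest...
        population = survivors + [     # ...and refill with mutated survivors
            mutate(random.choice(survivors)) for _ in range(40)
        ]

    print([round(x, 2) for x in min(population, key=loss)])

Run it and the printed numbers creep towards the target – no ground-up logic required.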

Sure, only one kind of output from one kind of data, more or less.

But I’m not convinced the same part of my brain that appreciates the colour blue is typing this article, either.

Perhaps there’s some mathematical hard limit to the interaction of compute, data sets, and the amount of energy available in our corner of the universe.

And perhaps that will stop something that sees, listens, talks, drives, and cracks jokes like us from emerging – whatever the fuck it’s actually doing inside its undead brain. That last puzzle we can leave to the philosophers.

Maybe by then all the philosophers will be algorithms, anyway.

I think, therefore it can

I propose a new Turing test.

If a purported being’s self-generated defence of its intelligence is indistinguishable from our own defence of our own intelligence, then the machine can think. In at least one way.

Watch this space!

And look out, Jerry Seinfeld.


A singular feeling

“I’ve had enough,” said Simon the other day in a lockdown Zoom chat. “I just want things to stop for a while.”

“God I miss the 1990s,” he added.

“It’s true,” I said. “Nothing happened in the 1990s.”

“Maybe the PlayStation.”

Like a lot of people, I’ve got the sense the world has been going slightly crazy in the past few years.

The financial crisis. A bear market. Online warfare. Trump. Brexit. Russian bots. A bear market, again. A whipsaw rally.

A virus that flies commercial. Around the world in a month, not a year. A horror story you see coming, between the photos of your aunt’s cat on your social media feed.

I realised I’ve been thinking about this all wrong.

This isn’t an overwhelming number of things happening.

It’s all the same thing happening.

It’s exactly what my friend Simon says. The world is speeding up.

It took over 500 years to go from Gutenberg’s printing press to IBM’s electric typewriter.

It took 25 years to go from the electric typewriter to the Compaq desktop PC.

15 years from there to the iMac. Ten years from iMac to iPhone.

Five years from mobile phone calls to Facebook to WhatsApp.

People aren’t shouting at each other on Twitter because they have gotten angrier.

They’re shouting on Twitter because it exists, and before it didn’t.

People don’t disagree with you because they know better.

Everyone disagrees because nobody is sure of anything.

The government lied. Wall Street lied. The news lied. Facebook lied. Now everything might be a lie.

And faster and faster it goes.

This is how we make way for the singularity.

Not with a bang. Not a whimper.

A whirligig.


Would you rather be killed by a robot?

Few of us want to die, but we have a greater aversion to going one way than another.

A classic example is air travel. Despite flying being statistically far safer than driving, many more people are afraid of it, and it is plane crashes that make the nightly news.

Of course the safety of air travel is what makes a rare calamity headline-worthy. Just another car crash caused by a sleepy, drunk, or texting driver will be lucky to make the local papers.

But there’s also something else going on.

Drivers – and perhaps even passengers – seem to accept their agency in a highway accident in a way that many airplane travellers do not. We feel helpless at 35,000 feet, but we suspend our disbelief. We’re equally helpless at 80mph on the motorway should a lorry jack-knife in front of us, but a few seconds before we felt like we were the kings of the road.

The difficulty of making an all-terrain, level 5 autonomous car that’s fit for purpose has curbed the enthusiasm of those of us who thought (hoped!) we were on the edge of a self-driving revolution.

But the squishy calculus that we apply to fatal accidents could hold back a rollout even if sufficiently viable technology comes along.

Do Androids dream of L-plates?

What’s sufficient?

In the US, the National Highway Traffic Safety Administration estimated that 36,750 people were killed in traffic crashes in 2018.

If the entire fleet had switched over to autonomous vehicles on 1 January 2019, and the number of deaths had subsequently dropped by one – to 36,749 – would it be celebrated as a success?

Unlikely – although the one extra person who lived to read that statistic at the end of the year might disagree.

Even leaving aside the many complicating factors absent from this illustration (noise in the statistical data, unintended effects such as greater alcoholism as everyone could now drink and ‘drive’, an increase in drug use or suicide among newly-unemployed truck drivers) we intuitively know the US public isn’t going to accept 36,749 robot-mediated driving deaths a year.

I don’t think the American public would accept 749 annual fatalities from buggy robot driving.

Perhaps not even 49.

Obviously this makes no logical sense, but there it is.

These questions will only grow louder as AI migrates further off the desktop, out of the cloud, and visibly into our daily lives.

  • Would you be happy if a robot lifeguard chose to save three elderly swimmers in difficulty over your six-year-old child?
  • Would you chalk it up to statistically predictable bad luck if an AI screen for breast cancer gave you a false negative, even if the chance of such an erroneous result was lower than if a friendly-faced radiologist had read the same slide?
  • Should a robot driver hold its course into a fatal collision with an oncoming out-of-control vehicle, killing several, or instead swerve to crush a baby in a pram?

That last example is regularly trotted out in the insurance industry, where such issues aren’t just interesting after-dinner talking points.

Someone will have to be responsible if someone is going to pay.

But for most of us, the questions are mushier. We recoil at even asking them, but the machines will need to know what to do.

One option is to avoid AI, even if doing so leads to worse outcomes and perhaps tens of thousands of preventable deaths.

Another is to live in a world where we come to accept the odd random destruction or death from poor or faulty or simply inexplicable AI decisions in the same way ancient people sighed and chalked up heart attacks or plagues as evidence of the whims of capricious gods.

That sounds defeatist. But it’s arguably better than 36,750 Americans dying every year at the hands of human drivers because nobody wants to be killed by a bug.