First they came for the big beefy blokes

Around 500 years ago, humans began to be replaced by machines. If it wasn’t happening to you then it was progress beyond imagination.

It gave us cars, The Beatles, the moon landings, the iPhone, and a global population of 7.9 billion.

(And, um, the Facebook newsfeed.)

If it was happening to you then it probably sucked. But soon enough a new generation came along who never knew otherwise and maybe history remembered you as a machine-smashing Luddite just to rub it in.

Strong men were replaced by winches and cherry-pickers.

Guys who could handle themselves with a pike after 20 years of practice were shot dead by a 17-year-old firing a rifle for the first time.

Weavers, natch, got replaced by weaving machines.

My father used to tell, somewhat guiltily, of how back in the punched-card days of computers he’d spent a month learning how the old boys rolled steel – by eye – in a giant factory where caravan-sized vats of the molten stuff sloshed around.

He’d been tasked with replacing them all with a computer program.

Which he then did.

Jobseeker’s allowance

For some reason smart people don’t think this can happen to them.

Are human brains really so special?

And does an Artificial Intelligence (AI) really need to do everything, or does it just need to do enough to do you and me out of a job?

A factory robot can’t learn ballet, but it can do more than enough to replace three or four assembly line workers.

An Amazon store festooned with cameras and sensors can do away with checkout staff, even if it can’t write an opera.

Why shouldn’t specialised AIs brute force their way through law, doctoring, computer programming, architecture…?

Rather chillingly, an AI is now even telling jokes, of a kind.

Those same smart people – one of whom, wearing another hat, I will do my best to pretend to be another day – will tell you that any of these instances is too limited to lead to a sentient AI.

Or that it is just pattern-matching (basically fancy search-and-replace), or that its ability to learn is limited by compute, or that the stuff an AI can do still isn’t very impressive anyway.

Well, maybe not, but ‘still’ is having to do a lot of heavy lifting there.

How long have you got for ‘still’ to pass? The universe will wait for processing power to catch up. The machines are infinitely patient.

[Chart. Source: Ark Invest]

Besides, we probably don’t need a super-sentient AI to take over the world in order for AI to be an existential threat to mankind.

Just humdrum AIs – each doing one thing better than any human employee – taking 90% of the jobs might start a revolution.

We can pull the triggers and bash the heads just fine for ourselves. We’re plenty good for that.

Whose line is it anyway?

The current approach to machine learning – the one making headlines for winning at Go and protein folding and all the rest of it – is (generously) only a couple of decades old.

As recently as my undergraduate degree (which sadly isn’t that recent, but I’m not that old) the field of AI hadn’t gone anywhere for 30 years, and we were taught how to program an AI by structuring logic from the ground up.

That approach was about as successful as anyone who has tried to reason with a six-month-old baby would expect it to be.

Today, though, you let your machine learning routine of choice run rampant over as much data as you can give it for as long as you can afford, allowing it to mutate thousands or millions of times and remorselessly killing off anything that doesn’t come up with the goods.

Eventually you might end up with something that can deliver the output you want. Sort of indistinguishable from magic.

You can then put it to work doing its business forever after.
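For illustration only, here’s a hypothetical toy version of that mutate-and-cull loop in Python – made-up data, a made-up two-number ‘model’, and none of the sophistication of the systems winning at Go, but the same remorseless logic:

```python
import random

# Made-up data for the toy example: points along the line y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(20)]

def error(model):
    """How badly a candidate model (slope, intercept) misses the data. Lower is better."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in data)

best = (random.random(), random.random())      # start from a random guess

for _ in range(100_000):                        # mutate for as long as you can afford
    mutant = (best[0] + random.gauss(0, 0.1),   # randomly tweak the current best...
              best[1] + random.gauss(0, 0.1))
    if error(mutant) < error(best):             # ...and kill it unless it delivers
        best = mutant

print(best)   # ends up very close to (2, 1), as if by magic
```

Real machine learning swaps the random tweaks for cleverer mathematics (gradient descent, mostly), but the ‘keep what works, discard what doesn’t’ spirit is the same.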

Sure, only one kind of output from one kind of data, more or less.

But I’m not convinced the same part of my brain that appreciates the colour blue is typing this article, either.

Perhaps there’s some mathematical hard limit to the interaction of compute, data sets, and the amount of energy available in our corner of the universe.

And perhaps that will stop something that sees, listens, talks, drives, and cracks jokes like us from emerging – whatever the fuck it’s actually doing inside its undead brain. That last question can be left to the philosophers to figure out.

Maybe by then all the philosophers will be algorithms, anyway.

I think, therefore it can

I propose a new Turing test.

If a purported being’s self-generated defence of its intelligence is indistinguishable from our own defence of our own intelligence, then the machine can think. In at least one way.

Watch this space!

And look out, Jerry Seinfeld.