Categories: Technology

First they came for the big beefy blokes

Around 500 years ago, humans began to be replaced by machines. If it wasn’t happening to you then it was progress beyond imagination.

It gave us cars, The Beatles, the moon landings, the iPhone, and a global population of 7.9 billion.

(And, um, the Facebook newsfeed.)

If it was happening to you then it probably sucked. But soon enough a new generation came along who never knew otherwise and maybe history remembered you as a machine-smashing Luddite just to rub it in.

Strong men were replaced by winches and cherry-pickers.

Guys who could handle themselves with a pike after 20 years of practice were shot dead by a 17-year-old firing a rifle for the first time.

Weavers, natch, got replaced by weaving machines.

My father used to tell, somewhat guiltily, of how back in the punched-card days of computers he’d spent a month learning how the old boys rolled steel – by eye – in a giant factory where caravan-sized vats of the molten stuff sloshed around.

He’d been tasked with replacing them all with a computer program.

Which he then did.

Jobseeker’s allowance

For some reason smart people don’t think this can happen to them.

Are human brains really so special?

And does an Artificial Intelligence (AI) really need to do everything, or does it just need to do enough to do you and me out of a job?

A factory robot can’t learn ballet, but it can do more than enough to replace three or four assembly line workers.

An Amazon store festooned with cameras and sensors can do away with checkout staff, even if it can’t write an opera.

Why shouldn’t specialised AIs brute force their way through law, doctoring, computer programming, architecture…?

Rather chillingly, an AI is now even telling jokes, of a kind.

Those same smart people – one of whom, wearing another hat, I will do my best to pretend to be another day – will tell you that any of these instances is too limited to lead to a sentient AI.

Or that it is just pattern-matching (basically fancy search-and-replace), or that its ability to learn is limited by compute, or that the stuff an AI can do still isn’t very impressive anyway.

Well maybe not, but ‘still’ is having to do a lot of heavy lifting there.

How long have you got for ‘still’ to pass? The universe will wait for processing power to catch up. The machines are infinitely patient.

[Chart – source: Ark Invest]

Besides we probably don’t need a super-sentient AI to take over the world in order for AI to be an existential threat to mankind.

Just humdrum AIs – each doing one thing better than any human employee – taking 90% of the jobs might be enough to start a revolution.

We can pull the triggers and bash the heads just fine for ourselves. We’re plenty good for that.

Whose line is it anyway?

The current approach to machine learning – the one making headlines for winning at Go and protein folding and all the rest of it – is (generously) only a couple of decades old.

As recently as my undergraduate degree (which sadly isn’t that recent, but I’m not that old) the field of AI hadn’t gone anywhere for 30 years, and we were taught how to program an AI by structuring logic from the ground up.

That approach was about as successful as anyone who has tried to reason with a six-month-old baby would expect it to be.

Today though you let your machine learning routine of choice run rampant over as much data as you can give it for as long as you can afford, allowing it to mutate thousands or millions of times and remorselessly killing anything that doesn’t come up with the goods.

Eventually you might end up with something that can deliver the output you want. Sort of indistinguishably from magic.

You can then put it to work doing its business forever after.
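For the curious, here’s roughly what that mutate-and-kill loop looks like as a toy – a sketch only, with a made-up target standing in for real data and a real loss function, and literal mutation standing in for the gradient descent that real systems actually use:

```python
import random

# Toy "mutate and kill" loop: evolve a small vector of weights towards a
# target. TARGET and fitness() are stand-ins for real data and a real loss.
TARGET = [0.1, 0.7, -0.3, 0.5]

def fitness(candidate):
    # Higher is better: negative squared error against the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(50)]

for generation in range(1000):
    # Score everyone, keep the fittest few, kill the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Refill the ranks with mutated copies of the survivors.
    population = survivors + [
        [w + random.gauss(0, 0.05) for w in random.choice(survivors)]
        for _ in range(40)
    ]

print(max(population, key=fitness))  # Drifts towards TARGET, as if by magic.
```

The spirit – try, score, discard, repeat at scale until something delivers the output you want – is the same.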

Sure, only one kind of output from one kind of data, more or less.

But I’m not convinced the same part of my brain that appreciates the colour blue is typing this article, either.

Perhaps there’s some mathematical hard limit to the interaction of compute, data sets, and the amount of energy available in our corner of the universe.

And perhaps that will stop something that sees, listens, talks, drives, and cracks jokes like us from emerging – whatever the fuck it’s actually doing inside its undead brain. That last question can be left to the philosophers to figure out.

Maybe by then all the philosophers will be algorithms, anyway.

I think, therefore it can

I propose a new Turing test.

If a purported being’s self-generated defence of its intelligence is indistinguishable from our own defence of our own intelligence, then the machine can think. In at least one way.

Watch this space!

And look out Jerry Seinfeld.

Categories: Technology

Nike buys the future of NFTs: when not if

Once upon a time, sprawling public companies ‘diworseified’ into vineyards and ill-capitalised finance divisions when they had too much money to spend.

But that’s soooo 1980s.

Now if you’re a leader of a big public company, the go-to move – and a great way to rile up anyone over 35 years old – is to purchase a digital assets start-up.

Only this morning a friend of mine (himself extremely tech savvy but on the wrong side of 45) rolled his eyes at news that Nike had just purchased an NFT collectibles studio for an undisclosed sum.

According to TechCrunch, the purchase of RTFKT:

… comes at an opportune time for the studio; RTFKT is currently behind one of the most talked-about NFT project drops of the month – a sweeping avatar partnership with artist Takashi Murakami called CloneX.

Since its initial drop less than three weeks ago, the project has already seen nearly $65 million in transaction volume according to crypto tracker CryptoSlam.

Terms of the acquisition were not disclosed. The startup raised an $8 million seed round back in May led by Andreessen Horowitz that valued the company at $33.3 million.

TechCrunch, 13 December 2021

My friend is dismayed by this direction of travel.

“I assumed at some point that shit would get real, even for Gen-Omicron!” he bemoaned.

Together in electric dreams

My friend is destined for a remaining lifetime of disappointment, at least on this front.

(The relative prospects for his hairline are more positive, if I recall his father’s head of hair correctly).

Because why would anybody expect “shit to become more real”?

Ever?

My first computer was a Sinclair ZX81. I whiled away hours making an asterisk jump over the character ‘A’, and programming the computers in WH Smiths to print OWAIN WAS HERE all down the screen.

How quickly we forget.

By 1991 I was chatting in real-time to computer science students in Hyderabad, India, via a multi-user text game.

Yet my father – an IT professional of two decades’ standing by that point – still couldn’t see why he’d ever need a non-work email address.

Today my girlfriend does all her work on Zoom and shops almost entirely online, while my friend’s kids pester him for v-bucks.

Needless to say my friend told me the latter by messaging on a smartphone.

We’re on a road to nowhere

The only direction of travel for humanity for the past 40 years has been more digital and more virtual and less ‘real’. (A term which will pretty soon become meaningless in this context.)

And technology can – as far as we’re aware – improve indefinitely, at least on human timescales.

Short of some great filter party-pooping, why would anyone expect this trend to reverse?

Extrapolating, our future will eventually mostly be ‘unreal’. All that’s up for debate is the timeframe.

In that context, Nike buying an NFT collectible studio is hardly something to roll your eyes at – unless you question its timing.

Especially as we don’t know the price.

Given the publicity generated by irate Boomers swapping links to the news story, this kind of acquisition might even pay for itself in PR terms right now.

Keep on (make) believing

Of course knowing the price often makes things worse when it comes to evaluating the NFT space.

I’ve found that people who rail against an $11.75 million CryptoPunk by ranting about the spurious, ephemeral nature of NFTs don’t put up too much protest if you ask them how they’d feel if the same CryptoPunk cost $11.75 (and no millions!).

Their complaint is not really about the technology. It’s sticker shock at the pricing.

Understandable. I can’t imagine many of today’s blockbuster NFTs will keep their value long-term, either.

And by the same token (boom boom), most of the bullish mania around blockchain is driven by insanely high prices, too.

I’d guesstimate that 99% of crypto-owners wouldn’t be owners if a Bitcoin still cost $0.001, for instance.

And that includes me.

But none of this price discussion says anything about whether NFTs – unique digital assets – are an important technology with a big future.

You might not fancy sporting your own unique digital sneakers in a game like Fortnite. But your grandkids won’t think twice.

In our mostly virtual destiny, do you agree scarce and unique assets would have more value than commodities that are infinitely replicable?

You do?

(Because how could they not? Even if only by a little bit).

Then congratulations, you’re also an NFT believer. You can stop being so angry now.

Categories: Technology

VR’s killer app: Unreality

Tron (1982): Why should all realities (fail to) look like our reality?

Benedict Evans sees virtual reality going back into hibernation:

We tried VR in the 1980s, and it didn’t work. The idea may have been great, but the technology of the day was nowhere close to delivering it, and almost everyone forgot about it. Then, in 2012, we realised that this might work now. Moore’s law and the smartphone component supply chain meant that the hardware to deliver the vision was mostly there on the shelf. Since then we’ve gone from the proof of concept to maybe three quarters of the way towards a really great mass-market consumer device.

However, we haven’t worked out what you would do with a great VR device beyond games (or some very niche industrial application), and it’s not clear that we will. We’ve had five years of experimental projects and all sorts of content has been tried, and nothing other than games has really worked.

Benedict Evans, The VR Winter

Like the supporter of a perennially mid-ranked football team, I too get my hopes up about VR every dozen years or so.

In the long-term, as with AI that passes the Turing Test, ubiquitous VR seems inevitable.

Why wouldn’t we spend our time in VR if some of these were true:

  • It was prettier than reality
  • It was easy to get things done there
  • It was possible to get the impossible done there (fly, visit Mars, have sex with your favourite Hollywood crush)
  • It was safer
  • It emitted less CO2

Like AI, VR also runs aground on the shores of reality.

The journey from “Wow!” to “Wait! What?” for 2020’s incarnation of Amazon Alexa is about two minutes of interaction.

It’s even shorter with VR.

It seems clear that throwing Moore’s Law at the problem will eventually bodge together a solution.

The snag is you can’t be confident the Moon won’t have crashed into the Earth before then.

Don’t believe the hype

I think VR has a branding problem.

Reality is today somewhere between not easy and downright impossible to recreate, whether it’s a sassy live-in helper named Alexa or a virtual reality New York.

Don the headset to try any good VR game, and you’re bedazzled by the transportation.

If the game is great – Half-Life: Alyx is the state-of-the-art – then there’s at least one or two mechanics that suggest we’re finally on the cusp of our William Gibson future.

But then you poke a non-player character in the face and he says nothing.

Or you can open a drawer but not a door.

Or you bump into your sofa.

Wait! What?

A moment ago you were there – somewhere.

Now you’re a child with a wind-up toy monkey clanging cymbals, frustrated it’s already run out of tricks.

Game over

I think Virtual Reality has a branding problem.

If the label on your tin promises ‘reality’, it’s always going to smell off when you open it.

True, half my wish list for VR involves recreating reality.

But more than half of it doesn’t.

Imagine if the first video game developers had tried to recreate a photo-realistic Wimbledon before they’d got started with Pong.

Or if Space Invaders really had to look like the world was being invaded by aliens before it shipped.

Instead, their creators used what they had to make super-stylised reductions of reality – and in time games did take over the world.

You might argue today’s VR games are the baby step equivalents of Pong or Space Invaders.

I disagree.

Today’s VR developers try to use the graphical fidelity you’d get in the best of today’s games to conjure up their virtual realities.

Doing so, they set up their own failure:

The game can’t generate its world on demand. This means every playable option has to be worked out in advance by a game designer. Which means there can’t be many options. Which isn’t like our reality.

The game can’t visualise its world on demand. This means every environment has to be created by game artists, or at best compiled from a limited set of props. Because game artists, money, and storage capacity are all finite, this means the world can’t be very large. Slash it has to be tiny. Which isn’t like our reality.

The game world doesn’t behave like our world. This means that while it might look like our world for an instant, an instant later it doesn’t. So it’s not virtual reality. It’s not even camping in the pit of the uncanny valley. To be honest it’s not even looking at the uncanny valley on a map.

A solution: Virtual unreality

What if VR designers stopped trying to wow us with the theatrics of recreating a real-world office, a lifelike shark cage, or the experience of running away from a flesh-and-blood zombie, and instead focused on creating their own realities?

Simple shapes. Limited colours. Narrower rules. Not many things you can do.

What if it was less 2018’s Annihilation and more 1979’s Asteroids?

I’m not suggesting someone create a VR shoot-em-up like Asteroids. They already have.

I’m suggesting tackling the problem at a higher level.

Maybe your virtual unreality (VU) world has rooms. Maybe it has doors and floors. Maybe it has some physics.

Maybe it contains ten objects. Maybe just three.

And that’s all it has.

But these three to ten things all interact in your VU in a completely internally consistent way.

Your VU engine can cover any eventuality, which is important because it means it can generate VU on-demand, on-the-fly.

Say I’m walking towards my real-life sofa – the VU can put something in my way, or curve an in-VU pathway to take me away from it.

If that hack takes me into a new space that previously wasn’t on the VU engine’s map, it doesn’t matter because the alternate world is simple enough that the engine can adjust accordingly, and the dance I did doesn’t violate any internal rules.

You’re in a place. Nothing is wrong, because you can’t do much – but everything you can do you can do.
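To make the sofa-dodging idea concrete, here’s a minimal sketch – the names, the numbers, and the 45-degree ‘curve’ are all invented for illustration, not taken from any real engine:

```python
import math
from dataclasses import dataclass

# Minimal sketch: the VU engine knows one rule -- never route the player
# into a real-world obstacle -- and is free to bend its simple world to
# honour it. Everything here is illustrative, not a real engine's API.

@dataclass
class Obstacle:
    x: float
    y: float
    radius: float  # metres of real-world clearance to protect

def next_waypoint(px, py, heading, sofa, step=0.5):
    """Advance the player one step, curving the virtual path around the sofa."""
    nx = px + step * math.cos(heading)
    ny = py + step * math.sin(heading)
    turned = False
    while math.hypot(nx - sofa.x, ny - sofa.y) < sofa.radius:
        heading += math.pi / 4  # bend the corridor 45 degrees and retry
        nx = px + step * math.cos(heading)
        ny = py + step * math.sin(heading)
        turned = True
    if turned:
        # Drop a prop on the blocked line so the detour reads as part of
        # the world rather than a glitch.
        print("spawn prop ahead; path curved")
    return nx, ny, heading

x, y, h = 0.0, 0.0, 0.0                    # player starts at the origin, walking east
sofa = Obstacle(x=2.0, y=0.0, radius=1.0)  # the real-life sofa, two metres away
for _ in range(8):
    x, y, h = next_waypoint(x, y, h, sofa)
    print(f"waypoint ({x:.1f}, {y:.1f})")
```

Because the rules are this simple, the engine never needs a pre-authored map – it can keep inventing consistent space for as long as you keep walking.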

Why not start with these simple building blocks? Work outwards from there?

Less Matrix. Less Tron, even.

More Flatland.

Manic Miner (1983): Nothing like mining. Utterly immersive.
Categories: Society, Technology

A singular feeling

“I’ve had enough,” said Simon the other day in a lockdown Zoom chat. “I just want things to stop for a while.”

“God I miss the 1990s,” he added.

“It’s true,” I said. “Nothing happened in the 1990s.”

“Maybe the PlayStation.”

Like a lot of people, I’ve got the sense the world has been going slightly crazy in the past few years.

The financial crisis. A bear market. Online warfare. Trump. Brexit. Russian bots. A bear market, again. A whipsaw rally.

A virus that flies commercial. Around the world in a month, not a year. A horror story you see coming, between the photos of your aunt’s cat on your social media feed.

I realised I’ve been thinking about this all wrong.

This isn’t an overwhelming number of things happening.

It’s all the same thing happening.

It’s exactly what my friend Simon says. The world is speeding up.

It took over 500 years to go from Gutenberg’s printing press to IBM’s electric typewriter.

It took 25 years to go from the electric typewriter to the Compaq desktop PC.

15 years from there to the iMac. Ten years from iMac to iPhone.

Five years from mobile phone calls to Facebook to WhatsApp.

People aren’t shouting at each other on Twitter because they have gotten angrier.

They’re shouting on Twitter because it exists, and before it didn’t.

People don’t disagree with you because they know better.

Everyone disagrees because nobody is sure of anything.

The government lied. Wall Street lied. The news lied. Facebook lied. Now everything might be a lie.

And faster and faster it goes.

This is how we make way for the singularity.

Not with a bang. Not a whimper.

A whirligig.

Categories: Society, Technology

Why you’re doomed to techno-befuddlement by the time you’re 70

A friend aspires to be as adept at using consumer technology in 30 years – when he’ll be in his late 70s – as he is today.

This will be me and my friend in 30 years’ time. Children will smirk at us before being re-submerged in their entertainment vats.

He believes many older people have been lazy about keeping up with the underlying advances of the past 50 years.

And he argues that because he works in software engineering and makes an effort to understand the principles behind new technology, he will be in a good position to achieve his goal of being able to program his semi-bio-engineered cyborg gardener using mind control as easily as his grandchildren in the year 2054.

I believe he’s missing the point, and we’ve had many debates.

Silver surfer wipeout

We first got onto this topic after my friend expressed frustration with his septuagenarian mother, who was struggling to read her online banking webpage.

She’d had the Internet for years! Why couldn’t she just fill in the boxes and click the right buttons?

Because, I argued, she didn’t grow up with it. It’s not in her bones, or her muscle memory, or the appropriate synaptic connections.

While most older people I know have basically got the hang of parsing webpages by now, it was fascinating watching them try in their early encounters with the Internet.

Very often they’d start reading from the top left. They’d scan right, then return to the left hand side, drop their eyes down a bit, and continue the process.

They were reading the webpage like a book!

Ever wondered why anyone clicked on banner adverts or got confused about content versus text ads in the sidebar?

Now you know.

Reading a webpage like a book is bonkers to my generation.

Most of us grew up with – or at least encountered – video games.

We were taught very young to treat the screen like a plate of tapas to pick and choose from, rather than as a sacred text.

Perhaps even those who missed games (many young girls, in the early days, for instance) were still trained to have a roving eye by the frenetic activity of Saturday morning cartoons, or by the visually didactic offerings of Play School and Sesame Street.

Older people grew up on books, and watched movies at the cinema that were first staged like theatre productions. Their hands were held by the film’s creators through the viewing. Though they couldn’t articulate it, they mostly knew what to expect next (what shot, what reaction shot, what panning shot, and so on).

Whereas we were taught to take what we needed from a screen. Webpages, when they came, were no big leap.

Of course we were also young, inquisitive, and took pleasure in being adaptable – qualities that do seem to wane.

In any event, reading webpages has diddly-squat to do with understanding hypertext or TCP/IP.

Similarly, many of our parents well understood what a video tape was capable of doing.

The reason they struggled to program their VCRs was because they grew up in a world of wooden horses and plastic cars and just one fancy piece of electronics in the corner of the living room that at first they weren’t allowed to touch.

Are you already a luddite?

If you’re in your mid-to-late 40s and you believe your grandkids won’t be helping you with your household appliances in 30 years, ponder the following:

  • Do you spend fewer than 10 hours a day consuming content via a handheld digital device?
  • Do you still own a CD or DVD collection, or even an iTunes library?
  • Do you take a photo of every meal you have in a restaurant and then distribute it on social media?
  • Do you ever answer your front doorbell?
  • Do you take 546 portraits of yourself in front of every cultural landmark you pass, and know which is your good side, the right angle for your chin, and what’s your best filter?
  • Are you innately au fait with the rule of three?
  • When was the last time one of your memes went viral?
  • Do you answer your phone and/or leave voice messages?
  • (You actually have a landline?!)
  • Did you meet your last three partners on dating apps?
  • Has your Facebook account been dormant since 2016?
  • How often do you Snapchat something you’re ashamed of?
  • Do you fall asleep with your iPhone?
  • Can you even imagine sitting in front of adverts on the TV?

Sometimes you should be answering yes, and sometimes no.

Hopefully the questions speak for themselves. Most of us my age are already past it.

And this is just 2010-2020 technology. If you’re under 30 and you’re thinking “sure”, wait until you see what’s coming next…

The future is child’s play

My point is that what defines technological advances, eras, generations – and alienation – is not how the technology works.

It’s what people do with it.

A clue that my friend is going to be metaphorically reaching into the befuddled darkness in his old age with the rest of us: he thinks none of the stuff in that list is new, and that it’s mostly stupid.

He doesn’t use Snapchat, he says, because he hasn’t got time, but anyway it’s just text messaging with pictures.

Posting photos of every dinner to Instagram is pointless, narcissistic, and distracting.

And so on.

Yes, perhaps I agree with him – but I would because I’m his age.

Our parents thought Manic Miner was a waste of time, too.

My father – who worked in Information Technology all his life – said I was in too much of a hurry to encourage him to get a home email address. Who was ever going to email him at home?

Whereas young people play with the new technology around them.

It’s not even new technology when you’re young. It’s just the world.

Their play may seem silly at first. But often they’re learning how to navigate the future.

Photographing your dinner seems ridiculous to those of us who made it to 46 without a daily visual record of what we ate.

But we weren’t cultivating multi-faceted media personalities from our pre-teen years with as large a footprint online as off.

I sent my friend a video this morning. It shows kids having fun using their AirPods as covert walkie-talkies in class:

My friend replied as follows:

A typically convoluted and wasteful solution. I’m sure they have great fun doing it, though.

(To get his tone, read his second sentence in the sarcastic voice of Basil Fawlty, rather than with the camaraderie of a Blue Peter presenter.)

His response illustrates why my friend will surely have to call out the droid-training man six times before it’s finally packing away the grocery deliveries the way he wants it to.

Or why he’ll be one of the last to order an autonomous car that has a hot tub instead of a driver’s seat.

Or why he’ll never meet a partner on Tinder who will only make love after micro-dosing LSD.

Or why he’ll insist on sending text messages, rather than sharing head-vibes via an embedded emote transmitter-receiver.

Or why he’ll die of a heart attack because he hadn’t tracked his blood via a wearable monitor disguised as a signet ring.

Or whatever actually does come down the road; the challenges of tomorrow’s technology won’t look like those of today.

I don’t mean to make fun of my friend. I applaud his aspiration.

But he has got a solution for a totally different problem.

Categories: Technology

Would you rather be killed by a robot?

Few of us want to die, but we have a greater aversion to going one way than another.

A classic example is air travel. Despite flying being statistically far safer than driving, many more people are afraid of it, and it is airplane crashes that make the nightly news.

Of course the safety of air travel is what makes a rare calamity headline-worthy. Just another car crash caused by a sleepy, drunk, or texting driver will be lucky to make the local papers.

But there’s also something else going on.

Drivers – and perhaps even passengers – seem to accept their agency in a highway accident in a way that many airplane travellers do not. We feel helpless at 35,000 feet, but we suspend our disbelief. We’re equally helpless at 80mph on the motorway should a lorry jack-knife in front of us, but a few seconds before we felt like we were the kings of the road.

The difficulty in making an all-terrain Level 5 autonomous car that’s fit for purpose has curbed the enthusiasm of those of us who thought (/hoped!) we were on the edge of a self-driving revolution.

But the squishy calculus that we apply to fatal accidents could hold back a rollout even if sufficiently viable technology comes along.

Do Androids dream of L-plates?

What’s sufficient?

In the US, the National Highway Traffic Safety Administration estimated that 36,750 people were killed in traffic crashes in 2018.

If the entire fleet had switched over to autonomous vehicles on 1 January 2019 and the number of deaths had subsequently dropped by one to 36,749, would it be celebrated as a success?

Unlikely – although the one extra person who lived to read that statistic at the end of the year might disagree.

Even leaving aside the many complicating factors absent from this illustration (noise in the statistical data, unintended effects such as greater alcoholism as everyone could now drink and ‘drive’, an increase in drug use or suicide among newly-unemployed truck drivers) we intuitively know the US public isn’t going to accept 36,749 robot-mediated driving deaths a year.

I don’t think the American public would accept 749 annual fatalities from buggy robot driving.

Perhaps not even 49.

Obviously this makes no logical sense, but there it is.

These questions will only get louder as AI migrates further off the desktop, out of the cloud, and visibly into our daily lives.

  • Would you be happy if a robot lifeguard saved three elderly swimmers in difficulty instead of your six-year-old child?
  • Would you chalk it up to statistically predictable bad luck if an AI screen for breast cancer gave you a false negative, even if the odds of such an error were lower than if a friendly-faced radiologist had read the same slide?
  • Should a robot driver head towards a fatal collision with an oncoming out-of-control vehicle, killing several, or instead swerve to crush a baby in a pram?

That last example is regularly trotted out in the insurance industry, where such issues aren’t just interesting after-dinner talking points.

Someone will have to be responsible if someone is going to pay.

But for most of us, the questions are mushier. We recoil at their asking, but the machines need to know what to do.

One option is to avoid AI, even if doing so leads to worse outcomes and perhaps tens of thousands of preventable deaths.

Another is to live in a world where we come to accept the odd random destruction or death from poor or faulty or simply inexplicable AI decisions in the same way ancient people sighed and chalked up heart attacks or plagues as evidence of the whims of capricious gods.

That sounds defeatist. But it’s arguably better than 36,750 Americans dying every year at the hands of human drivers because nobody wants to be killed by a bug.

Categories: Technology

Instagram: On the node

A candid photo exposes the reality behind so many aspirational Instagram photographs set in impossibly beautiful locations:

Norway’s Trolltunga: The Instagram myth
Trolltunga: The reality

CNBC notes:

A decade ago, fewer than 800 people a year traveled to Trolltunga. Next year, that figure’s expected to hit 100,000.

People queue with dozens of others, babbling and checking their phones, to be photographed standing at Trolltunga as if in solitary meditation.

What’s going on?

The 1990s interpretation: Cheap air travel has freed a generation that puts more of a premium on experiences than on stuff to seek out the world’s greatest places.

The 2000s interpretation: The explosion of information on the Internet and the ubiquity of smartphones have made people more aware of where they can visit and what sort of experiences they should pursue.

The 2019 reality: Instagram has made every place a de facto node on a real-world physical network. Social media influencers and network effects drive superlinear traffic to the most popular nodes, which only increases their subsequent popularity.

Instagram will eat the world

Twenty years ago, the National Geographic could publish a photo of Trolltunga to the fleeting interest of a magazine browser. One or two might add Norway to their holiday lists.

Today’s aspirational Instagram user identifies Trolltunga as a resource. In consuming that resource – by visiting, photographing, and posting – they make a honey trap that attracts 100 more.

Hence the most popular spots are noded and overrun, and this kind of mathematics implies they’ll be impossibly crowded within an iteration or two.
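The maths really is that brutal. A back-of-the-envelope sketch, using nothing but the CNBC figures above and assuming each year’s visitors inspire a fixed multiple of new ones the following year:

```python
# Back-of-the-envelope noding maths: if each year's visitors inspire a fixed
# multiple of themselves to come the next year, growth is geometric. The
# multiplier below is simply whatever turns 800 visitors into ~100,000 over
# ten years -- fitted to CNBC's two data points, nothing more.

visitors = 800
multiplier = (100_000 / 800) ** (1 / 10)  # roughly 1.62x per year

for year in range(11):
    print(f"year {year:>2}: {round(visitors):>7,} visitors")
    visitors *= multiplier
```

At that rate the crowd roughly quadruples every three years – the ‘impossible within an iteration or two’ maths in action.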

Solutions?

  • Restricted access to the most popular nodes (quotas, dollars)
  • A counter-cultural trend towards more obscure nodes (at best a delaying tactic)
  • Simulacra nodes. A fake Grand Canyon. A 3D-printed Taj Mahal. Machu Picchu remade for middle-class China to visit by train.
  • The Instagram craze dies down (unlikely)
  • Eventually we all live in the matrix, anyway

Many of these solutions sound phoney.

Are they phonier than the myth of Trolltunga today?