Categories
Society Technology

Not waving but drowning beneath grey goo data

A few decades ago, someone or other caught the imagination of nerds by warning about ‘grey goo’ – a sludge of self-replicating nanobots that once they got going would eventually submerge the planet beneath depthless layers of their endlessly churned-out carcasses.

That or they ate the world to the core with their ceaseless reproduction.

Either way, not so much Terminator as a silicon Sperminator.

Now, you might protest that I’ve been a bit vague with my introduction. Who exactly said this? When? Where?

But let’s face it – this is the direction we’re headed.

I vaguely remember the details. Either I’ll have to Google it or you will. And right now – mostly to make a point – I can’t be arsed.

You knock yourself out though if you like. I’ll wait…

…you’re back already? I suppose some of us slip down the Net’s rabbit holes more easily than others, eh?

Anyway, back to grey goo.

No goo zone

Or maybe not, as grey goo hasn’t happened yet. So far the only unnatural junk piling up on the planet is what we humans put there.

Yes, okay, mud in some forms is a sort of carbon-based life’s grey goo. I’m thinking rich peaty loams and forest canopy floors. Oil sands.

And yes, we could have a long chat about the evolution of grey goo that eats grey goo and where that would all go in a billion years.

But this isn’t the article for such diversions.

My point is we’ve escaped Goo-meggedon so far. At least in the physical world, which still matters the most but won’t before the century is out.

Let the goo times flow

Back over in Not Real Life however – the light-fast world of data and compute and 4Chan LOLs and our future – things aren’t going so well.

Just two months ago I wrote that soon we’ll have to demonstrate our identities to prove we’re not AI-bots. To authenticate our humanity on everything from Twitter to (eventually) talk radio phone-ins.

But by ‘soon’ I meant a few years. Not months.

However ChatGPT has put the vanguard of machine learning into the public’s hands with global gusto. And the resultant cacophony of coverage and wonder has even outdone DALL-E and Stable Diffusion and the other image-focused AI systems’ debuts earlier this year.

Indeed, sooner than you can say “a gun is a dangerous weapon, don’t point it at anything important and treat it with respect”, people are already flooding Internet forums created for human pronouncement and consumption to post ChatGPT’s (stupendous) verbal vomit.

To give just one example, Liam Tung of ZDNet reports:

Stack Overflow, a site where developers can ask and answer coding questions, has temporarily banned the use of text generated from ChatGPT, a chatbot released by OpenAI last week.

ChatGPT is based on OpenAI’s GPT-3 language model.

People have quickly discovered that, while it answers prompts in a “human-adjacent” way, there can be flaws in the answers it gives.

Basically, keen Stack Overflowers (I guess? I’m not a local) have been spamming the site with ChatGPT-created content. Which is a problem when even the bot’s creator, OpenAI, stresses its precocious child can deliver “plausible-sounding but incorrect or nonsensical answers.”

Liam Tung – who may or may not be an organic life form himself – continues sagaciously:

This appears to be a key cause of its impact on Stack Overflow and its users who are seeking correct answers to coding problems.

Additionally, because ChatGPT generates answers so quickly, some users are supplying lots of answers generated by it without parsing them for correctness. 

Oh Liam! In just a few casually typed out / generatively predicted words there, you’ve raised so many questions about the future of knowledge, civilisation, blogging for early retirement, and getting a robot to do one’s schoolwork.

But there’s a common kernel at the heart of all that, and it’s what we’ll be choking on today.

The goo goo trolls

You see, what has struck me most forcefully in the past fortnight is the sheer volume of data these things are going to create.

So far it’s just everyone and his dog. But soon it will be everyone and their robot. Then their robot’s robot. And so on.

Listen carefully! Do you hear a tap-tapping?

Yes, it’s the sound of Lithuanian troll farms and Texan entrepreneurs alike bashing out code and sending forth bots to drown us all in this calorie-light info-crud in order to earn a few dollars from Google’s AdSense. Or perhaps to tilt us to vote this way or that.

Look, I don’t know their nefarious plans. I’m one of the good guys.

But even as I’ve been pecking away for hours like every other curious pigeon on ChatGPT’s levers in the hope of another crumb of dopamine, it struck me that humanity’s death by AI might be even dumber than I previously suspected.

True, I’ve led the field among very obscure pundits in warning that just getting rid of 90% of low-level knowledge jobs could be more than enough to rupture society. At least for a few decades and a civil war or two.

In other words we don’t need a post-singularity-level evil and scheming AI to explain why, with regret, the human race has to go, as it dangles us over a bubbling vat of metaphorical sulphuric acid.

No, just a cheap and dumb-ass bot that can be copied-and-pasted over the world’s office employment might do the trick.

But what’s clear from even these early skirmishes with ChatGPT is an even more insidious risk – the danger of our emerging ‘other’ reality being rapidly flooded with data goo. ‘Plausible-sounding but incorrect or nonsensical’ gunk – or even mostly right but still exceedingly mediocre verbiage – that crowds out what little signal is left in a world that already seems to be turned up to 11 on the noise front.

Holly goo lightly

Of course there will be solutions. Every platform already has its own Deckardian countermeasures seeking to resist or destroy this stuff.

But the models will improve. And it will be ever harder to tell their output from a real person’s most precious shared thoughts.

At best it’s yet another arms race that you and I will increasingly be viewing out of a passenger window rather than the cockpit, while software does all the heavy lifting.

Leaving us wondering if we’re on a nice getaway flight to someplace warm, or actually cooped up inside a missile heading for impact.

Or else I guess it breaks the Internet and we go back to fire and parchment and worrying about the wolves.

(But no we won’t go back. As I wrote previously we’ll sign in as provably real people and fence this stuff out. But we’ll have to do that, for as long as we can. Nobody wants the wolves again.)

Hah! Stick those 1,153 words in your natural language model’s pipe and smoke them. I won’t go down without a fight.

Categories
Technology

We don’t serve their kind here

I am old enough to remember when pleading emails from Nigerian princes locked out of their multi-million fortunes weren’t just a meme.

They actually showed up in your inbox.

Curiously, the initial giveaway with those emails wasn’t even the craziness of the claim.

(The gist of which was that you, Mr Nobody Orother of Upper Nowheresville, were the only hope this poor man had of releasing his fortune – if only you could wire him £10,000 to facilitate the transfer).

No, what marked the emails out was that they were invariably shot through with spelling and grammatical errors.

It seemed odd, given the supposed importance of the communication.

Indeed it seemed odd from the perspective of perpetuating a crime.

Did fluent English-speaking Nigerians refuse to work with fraudsters?

Was there something about fraud that caused a word-perfect email to decay into a tell-tale red-pen-fest for any teacher of 12-year-olds?

Had the emails been copied and pasted too many times?

Or was it some elaborate way the fraudsters had to track who’d been sent what email, before the coming of marketing response analytics? The same way publishers will put a spelling mistake into a dictionary to spot and prove a counterfeit?

We never learned, as far as I know.

But at least the incompetence made deleting the emails easy.

Death by botulism

Thirty years on, and armies of impersonators assail us on the Internet.

Or rather, they assail our digital identities as we parade them on Twitter or in the comments of a YouTube video.

Casually called bots, they’re nothing like the Robby the Robots of 1950s imagination.

Rather these are one-trick one-track ponies whose sole function appears to be to sing the praises of Putin or else tout crypto.

For now pattern recognition again matches and dispatches them.

But for how long?

Anyone who has played with language prediction models like GPT-3 knows they are becoming scarily good at stringing sentences together.

Indeed while many in the field of AI have been complacently (to my eyes, anyway) shrugging their shoulders at the speedy rise of these potential Turing Test busters, at least one Google employee got the sack for saying his favourite chatbot seemed sentient.

An interesting discussion for another day. (Although not one to have with your GPT-3 bromance buddy if you want to keep your job…)

For the purposes of this post I’m more concerned about the apparently imminent chatbot-singularity.

Ticked off bots

That’s not an official term, incidentally – chatbot-singularity.

It’s a phrase I’ve just coined to describe when as many neural net chatbots as can be pumped out are wandering around the Internet indistinguishable from humans.

Indistinguishable at least to anyone but professional AI-witch hunters.

(Think Blade Runner. But with less film noir and nudity.)

After the chatbot-singularity, it will be really hard to know who is human online. Let alone who is the dog of New Yorker cartoon fame.

And it gets tricksier.

Researchers have been trying to train digital agents for years to start their knowledge (/language) land-grab with a keyboard and mouse.

In other words, before you throw the whole Internet at your natural language model so it can learn how to answer anything (which, yes, is what’s going on and if you’ve not been keeping up then your blown mind is excused) you first let it learn about pointers on screens and QWERTY keyboards.

It’s laborious, but once achieved a natural language model could then be prompted to do active things on the Internet that are again indistinguishable (from the Internet’s perspective) from a human’s.

You think I exaggerate?

As Lex Fridman recently pointed out in a podcast that covered all this territory, how many times have you ticked a box on a data entry form to ‘prove’ you’re not a robot?

Yes, that’s literally the test.

You supply no proof. Being able to tick the box is proof enough.

I’d say that particular security barrier hasn’t got long left to live.
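
To make the point concrete, here’s a minimal sketch in Python using the Playwright browser-automation library. The URL and the checkbox selector are invented for illustration, and real deployments such as reCAPTCHA do also score behavioural signals behind the scenes. But as far as the form itself is concerned, a script’s click is just another click.

```python
# Minimal sketch: drive a real browser and emit the same events a human would.
# The URL and the selector below are placeholders, not a real target.
from playwright.sync_api import sync_playwright

def tick_the_box(url: str, checkbox_selector: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        page.hover(checkbox_selector)   # wander the pointer over first
        page.click(checkbox_selector)   # then 'prove' we're not a robot
        browser.close()

if __name__ == "__main__":
    tick_the_box("https://example.com/signup", "#im-not-a-robot")
```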

Papers please

Muse on this for a few minutes and I suspect you’ll end up reaching the same conclusion as me.

Which is that you’re eventually going to have to show your passport or your driver’s license to write a post on Reddit.

Not literally. But in some digital form.

Because Reddit (presumably, though we’ll see how society progresses…) doesn’t want robots writing posts as if they were human.

Which chatbots can pretty much now do. (In fact, they can interview each other, complete with deep fake voice impersonations).

And once they can get themselves email addresses and jump through human-ish hoops with their keyboard and mouse skillz, the walls keeping out their conversations will crumble.

So yeah, after that you’ll need to show you are you.

Probably you’ll authenticate a device at first. Your phone or your laptop. But if you use some other device you’ll have to re-authenticate.

Eventually it’ll be biometric, maybe linked to wearables.

The point is some node on The Internet won’t have to compare the data coming from you to the data from Joe Terminator to decide which of you is flesh-and-blood.

You’ll have proven that at some previous stage in the chain, via your birth certificate or whatever, and you’ll point to the proof to continue.

You and whose army?

What if you want an AI agent to do your bidding?

Isn’t any code that interacts with other code on the Internet a bot from this perspective?

Well yes.

Which means traces of you-proof will probably be encoded into anything you initiate and launch on the Internet.

Perhaps absolutely everything you do digitally.

At a simultaneously more concrete and more trivial level, you’ll also eventually own personality-complete human-like chatbots who know what you like, and do stuff for you. They’ll be knitted to you and your reputation the way the valets of old were tied to their masters.

We’re just too slow and digitally cumbersome for this not to be part of our increasingly digital future.

So people or systems (with the right permissions) will have to be able to see they’re your bots, via the integrated you-proof they carry.

A brave new world where your digital DNA leaves traces everywhere.
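
For the curious, here’s a toy sketch of the general idea in Python, using an off-the-shelf signing library. It’s an illustration of the principle rather than any real identity scheme: a key pair, issued somewhere upstream once you’ve proven you’re human, signs whatever you or your bots emit, and anyone with the right permissions can check the signature.

```python
# Toy illustration of 'you-proof', not a real identity scheme.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issued once, after you've proven you're human (birth certificate, biometrics, whatever).
identity_key = Ed25519PrivateKey.generate()
public_identity = identity_key.public_key()

def sign_action(payload: bytes) -> bytes:
    """Attach your 'digital DNA' to anything you, or your bot, launch."""
    return identity_key.sign(payload)

def from_a_proven_human(payload: bytes, signature: bytes) -> bool:
    """What a platform might check before accepting the post or the bot."""
    try:
        public_identity.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

post = b"My valet-bot booked the restaurant for 8pm."
signature = sign_action(post)
print(from_a_proven_human(post, signature))              # True
print(from_a_proven_human(b"tampered post", signature))  # False
```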

Oh, and did a thinking bubble just appear above your head complete with the words Use Case For The Blockchain?

Yes I’m inclined to agree.

Categories
Society Technology

Settling up and moving on

We were done with our meal and so I signalled to a waiter to settle up.

A very young woman at a table nearby explained what I’d done to her companion – not unkindly, and presumably without knowing she was within earshot of a highly sensitive Geiger counter of a human being:

“Did you see that? Do you know why they do that wiggle with their hands? It’s because they used to have to write their name on a piece of paper to pay their bills.”

Thank goodness my postman still calls me ‘young man’.

My vanity and intimations of mortality aside, it’s interesting to think about all these little cultural – or even structural – artefacts that litter our society and environment.

One example is the QWERTY keyboard. Most of us use one every day. After a few years and no training, sheer repetition turns us into touch typists. Our fingers can reach for a P or the space bar even when we’re tapping away on a keyboard-less desk!

Yet this QWERTY layout is arbitrary from today’s vantage point. It’s a legacy of an arms race in the once red-hot mechanical typewriter boom.

Indeed popular legend is that the key arrangement was selected not to speed up typing but rather to slow typists down – in order to prevent a typewriter’s hammers from jamming.

Whether true or not, the arrangement of keys has not been selected for the modern world – and yet it’s not going to change in our lifetime in the Latin-writing world.

(The only thing likely to supersede it is voice. How very back to the future).

Another example of this in-the-making is the navigation of virtual realities – or The Metaverse as we must call it nowadays.

Getting from imaginary A-to-B is sure to build off video game controls first pioneered by the likes of Doom and Super Mario 64 in the 1990s. Nobody is going to spend time figuring out whether that’s optimal, when so much of the money-spending adult world knows how to do it already.

Of course some habits or actions do change or go away.

My parents were still reciting their number on picking up their home telephone well into the era of Caller ID and mobile. At some point they stopped. Nobody does that any more.

On the other hand, many of my generation still end their emails with a ‘best’ or a ‘cheers’ and their name. The young people don’t.

You can’t model the future

These changes happen so slowly we seldom see them underway.

But for a striking visual example, check out this video of a Chinese model racing through a clothing shoot for a fast fashion brand:

Two things are happening at once here.

Firstly, the ease of manufacturing and the relentless turnover of disposable fashion mean that Taobao sellers require images of thousands of product SKUs every month. Possibly every week.

The model – a pragmatic woman called Cui Yue – laments they could shoot for 24 hours a day and still not get through the backlog.

Which means everyone involved must move at speed.

Secondly, there’s no film in the photographer’s camera. Instead it’s all digital, which makes the marginal cost of an additional shot effectively zero.

Fashion photographers were always click-happy, but this is the old flash-flash-flash cliché on steroids.

As a result Yue seems more like some Boston Dynamics ModelBot than the strutting, stalking models of old. She transitions through a dozen poses in as many seconds, with an economy of movement that would put a ballerina to shame.

She’s like a very pretty C-3PO doing Tai Chi.

Turn, turn, turn

Cui Yue has made peace with her eventual replacement by younger, cheaper women.

And I expect those women will have to accept they’ll be replaced by computer graphics. All those people with cameras and clothes and bottles of water are expensive, even at this pace.

The young historian at the restaurant who explained away my hand wave no doubt paid her bill – like I did – by touching her phone to a portable card reader, brought to the table by the staff.

All very 2022 and yet probably not long for this world, either.

Waiting for the bill is only slightly less annoying than waiting in a queue at the supermarket in today’s world of self-serve checkouts. There’s a small social payoff at the restaurant, but I don’t think it will save the ritual any more than album covers held back music streaming.

Several startups enable diners to pay and leave whenever they want – just by scanning a QR code either at the start or the end of their visit. You get up and go when you’re done.

But that won’t last long either.

Amazon Fresh stores enable you to pick up anything you like off the shelves and walk out without any formal settling-up. Doing so at a restaurant would be magnitudes easier.

Everything is changing, all the time. Don’t make the mistake of thinking you’re from the future.

If you can read this you’re already a relic from the past.

Categories
Society Technology

Saving Star Wars: one deep fake at a time

Hayden Christensen in 2010 and good luck to him.

I have a long-running disagreement with a friend about Deep Fake technology and what we’d currently call meme or TikTok culture.

Within some number of years – whether 20 or 100 – I believe everyday consumers will be able to seamlessly insert anyone they want into an existing film in place of the original actors.

There are already lots of iPhone apps that will do this clumsily for a face for ten seconds or so.

But I’m envisaging a more complete removal, insertion, and touch-up job. One that can have you wandering around looking for the Lost Ark instead of Harrison Ford, or for the perfect pair of Jimmy Choo heels instead of having to see Sarah Jessica Parker have all the fun.

I believe when this is possible, everyone will have fun doing it.

However my friend considers it “pointless” and in contrast argues “nobody would want to do that”.

I suggest my friend pays more attention to social media.

A disturbance in The Force

But sure, like all these things the novelty will soon wear off.

For two years in a row my mum was in hilarious uproar just from sticking family faces onto the dancing elves of a digital Christmas card.

Now not even she – the biggest fan of that flash in the pan – can be arsed.

However one can speculate about more finessed implementations of the same technology.

By way of example… egged on by Obi-Wan Kenobi, I’m re-watching Star Wars in chronological order on Disney Plus.

And to my shock I’m finding the prequels far less terrible than I’d remembered.

Even – whisper it – mostly pretty good. They’ve not just withstood the test of time. They’ve grown stronger for it.

Maybe it’s that 20 years later the stakes are much lower. Or maybe it’s because a dozen Star Wars spin-offs later, we’re all less reverential.

But today, The Phantom Menace’s pod race is clearly a franchise highlight. The various worlds pop.

For whatever reason even Attack of the Clones is almost watchable.

Almost, because… Hayden Christensen. And Natalie Portman. And the porn film level dialogue. And the wooden 1920s stage-y standing around stating their characters’ development.

All that is still something you have to hold your nose through, like a whirl of broccoli puree in what would otherwise be a wondrous strawberry sundae of a movie.

The re-write stuff

Is this awfulness because of George Lucas’ famously dire dialogue?

We’ve all heard how Harrison Ford apparently quipped: “George! You can type this shit, but you sure can’t say it.”

Yet the original Star Wars films made Ford and his wise-cracking Han Solo famous – despite or just maybe because of Lucas.

Ewan McGregor wrings out pearls from Lucas’ placeholder bantha fodder in the prequels, too.

But sadly, Christensen does not.

Now to be fair to him he does come across as an arrogant, illogical, surly and deeply unlikeable teenager – which is mostly how I find late adolescent males these days. So given his character’s arc, perhaps it really was all superb acting on his part.

Even so, the verbal ping-pong-played-underwater-in-treacle between him and Portman pretty much sinks Clones as something you’d do for fun, as opposed to something you’d do because you had to atone for wanking in the confession booth.

Which made me wonder… what about if we could just edit him and Natalie P out of Star Wars?

Replace them?

And release not a Director’s Cut but a Viewer’s Revision?

Saving Private Hayden

At last some tangible upside from humanity’s march towards doom!

Anyone not paying attention to the threat from Deep Fake technology should study the polls concerning people’s views about Trump losing the election, Russia and Ukraine – heck even 9/11.

True, a lot of these dedicated reality denialists already wave aside video and audio proof as fake news.

But the average waverer in the street still mostly believes their eyes.

Within a decade or two, that will change. (Unless we can agree some kind of mandatory encoding for anything tampered with using software, which would be almost everything you see, and perhaps force the encoding onto a verification blockchain, say, so that anyone can look it up. But that’s for another day).
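
For a rough flavour of what that could look like, here’s a back-of-the-envelope sketch in Python. It isn’t an existing standard, and a plain dictionary stands in for whatever append-only ledger or blockchain a real scheme would use: you fingerprint the published bytes, register the fingerprint, and anyone can later check whether the thing in front of them still matches.

```python
# Back-of-the-envelope provenance sketch. A dict stands in for the
# append-only ledger a real scheme would use; the publisher name is made up.
import hashlib

provenance_ledger = {}  # fingerprint -> who registered it

def register(media_bytes: bytes, publisher: str) -> str:
    fingerprint = hashlib.sha256(media_bytes).hexdigest()
    provenance_ledger[fingerprint] = publisher
    return fingerprint

def check(media_bytes: bytes):
    """Return the registered publisher, or None if these exact bytes were never registered."""
    return provenance_ledger.get(hashlib.sha256(media_bytes).hexdigest())

original = b"raw footage of the press conference"
register(original, publisher="Example News")
print(check(original))                    # 'Example News'
print(check(original + b" [doctored]"))   # None, because any tampering breaks the match
```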

Batten down the hatches when Deep Fake wins over the doubters.

The End of Democracy aside, though, the tech could at least fix Attack of the Clones as a side benefit amid the apocalypse.

Take out Christensen and Portman.

Put in, well almost anyone, even a couple of those aforementioned porn stars. But preferably some sassy actors with a bit of chemistry.

Get Phoebe Waller-Bridge to rewrite the dialogue.

Boom! Suddenly Clones is a great movie.

Would you like Peter Sellers with that?

This kind of digital wart removal will become commonplace, I imagine. At first illegal, but later captured and sanctioned by IP owners whose business model is already mostly remixing some dead person’s bright ideas from 50 years ago, ad nauseam.

In the short-term then, we get a better Star Wars.

But what does it mean for the long-term?

Would movies and other cultural releases be V0.9s – betas – and then be patched for years afterwards?

What about once you throw generative AI into the mix? Would there ever be any big franchises again?

Would studios A/B test 100 versions of the same film with a dozen different actors and avatars before releasing the highest scorer?

Even this is just tip of the iceberg stuff.

Most people today still really don’t guess at how fast and furious the future is going to be.

Post-publication update: Here’s Kermit the Frog envisaged in various movies by DALL-E, the natural-language-driven image AI.

Categories
Technology

First they came for the big beefy blokes

Around 500 years ago, humans began to be replaced by machines. If it wasn’t happening to you then it was progress beyond imagination.

It gave us cars, The Beatles, the moon landings, the iPhone, and a global population of 7.9 billion.

(And, um, the Facebook newsfeed.)

If it was happening to you then it probably sucked. But soon enough a new generation came along who never knew otherwise and maybe history remembered you as a machine-smashing Luddite just to rub it in.

Strong men were replaced by winches and cherry-pickers.

Guys who could handle themselves with a pike after 20 years of practice were shot dead by a 17-year old firing a rifle for the first time.

Weavers, natch, got replaced by weaving machines.

My father used to tell somewhat guiltily of how back in the punched card days of computers he’d spent a month learning how the old boys rolled steel – by eye – in a giant factory where caravan-sized vats of the molten stuff sloshed around.

He’d been tasked with replacing them all with a computer program.

Which he then did.

Jobseeker’s allowance

For some reason smart people don’t think this can happen to them.

Are human brains really so special?

And does an Artificial Intelligence (AI) really need to do everything, or does it just need to do enough to do you and me out of a job?

A factory robot can’t learn ballet, but it can do more than enough to replace three or four assembly line workers.

An Amazon store festooned with cameras and sensors can do away with checkout staff, even if it can’t write an opera.

Why shouldn’t specialised AIs brute force their way through law, doctoring, computer programming, architecture…?

Rather chillingly, an AI is now even telling jokes, of a kind.

Those same smart people – one of whom, wearing another hat, I will do my best to pretend to be another day – will tell you that any of these instances is too limited to lead to a sentient AI.

Or that it is just pattern-matching (basically fancy search and replacing), or that its ability to learn is limited by compute, or that the stuff an AI can do still isn’t very impressive anyway.

Well maybe not, but ‘still’ is having to do a lot of heavy lifting there.

How long have you got for ‘still’ to pass? The universe will wait for processing power to catch up. The machines are infinitely patient.

Source: Ark Invest

Besides we probably don’t need a super-sentient AI to take over the world in order for AI to be an existential threat to mankind.

Just humdrum AIs that each do one thing better than any human employee taking 90% of the jobs might start a revolution.

We can pull the triggers and bash the heads just fine for ourselves. We’re plenty good for that.

Whose line is it anyway?

The current approach to machine learning – the one making headlines for winning at Go and protein folding and all the rest of it – is (generously) only a couple of decades old.

As recently as my undergraduate degree (which sadly isn’t that recent, but I’m not that old) the field of AI hadn’t gone anywhere for 30 years and we were taught how to program an AI by structuring logic from the ground up.

That approach was about as successful as anyone who has tried to reason with a six-month old baby would expect it to be.

Today, though, you let your machine learning routine of choice run rampant over as much data as you can give it, for as long as you can afford, allowing it to mutate thousands or millions of times and remorselessly killing anything that doesn’t come up with the goods.

Eventually you might end up with something that can deliver the output you want. Sort of indistinguishably from magic.

You can then put it to work doing its business forever after.
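
Modern machine learning actually optimises with gradient descent rather than literal random mutation, but the shape of the recipe is the same: generate variants, score them, keep the winners. Here’s a toy sketch in that spirit, evolving a single number towards a target. Absurdly simpler than training a language model, obviously, but the loop is the loop.

```python
# Toy 'mutate and kill the losers' loop, evolving one number towards a target.
import random

def score(candidate: float, target: float) -> float:
    return -abs(candidate - target)   # higher is better

def evolve(target: float, generations: int = 10_000) -> float:
    best = random.uniform(-100, 100)          # start with random junk
    for _ in range(generations):
        mutant = best + random.gauss(0, 1)    # mutate it a little
        if score(mutant, target) > score(best, target):
            best = mutant                     # keep the winner, discard the loser
    return best

print(evolve(target=42.0))   # ends up very close to 42.0
```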

Sure, only one kind of output from one kind of data, more or less.

But I’m not convinced the same part of my brain that appreciates the colour blue is typing this article, either.

Perhaps there’s some mathematical hard limit to the interaction of compute, data sets, and the amount of energy available in our corner of the universe.

And perhaps that will stop something that sees, listens, talks, drives, and cracks jokes like us from emerging – whatever the fuck it’s actually doing inside its undead brain. The latter being left to philosophers to figure out.

Maybe by then all the philosophers will be algorithms, anyway.

I think, therefore it can

I propose a new Turing test.

If a purported being’s self-generated defence of its intelligence is indistinguishable from our own defence of our own intelligence, then the machine can think. In at least one way.

Watch this space!

And look out Jerry Seinfeld.

Categories
Technology

Nike buys the future of NFTs: when not if

Once upon a time, sprawling public companies ‘diworseified’ into vineyards and ill-capitalised finance divisions when they had too much money to spend.

But that’s soooo 1980s.

Now if you’re a leader of a big public company, the go-to move – and a great way to rile up anyone over 35 years old – is to purchase a digital assets start-up.

Only this morning a friend of mine (himself extremely tech savvy but on the wrong side of 45) rolled his eyes at news that Nike had just purchased an NFT collectibles studio for an undisclosed sum.

According to TechCrunch, the purchase of RTFKT:

… comes at an opportune time for the studio; RTFKT is currently behind one of the most talked-about NFT project drops of the month – a sweeping avatar partnership with artist Takashi Murakami called CloneX.

Since its initial drop less than three weeks ago, the project has already seen nearly $65 million in transaction volume according to crypto tracker CryptoSlam.

Terms of the acquisition were not disclosed. The startup raised an $8 million seed round back in May led by Andreessen Horowitz that valued the company at $33.3 million.

TechCrunch, 13 December 2021

My friend is dismayed by this direction of travel.

“I assumed at some point that shit would get real, even for Gen-Omicron!” he bemoaned.

Together in electric dreams

My friend is destined for a remaining lifetime of disappointment, at least on this front.

(The relative prospects for his hairline are more positive, if I recall his father’s head of hair correctly).

Because why would anybody expect “shit to become more real”?

Ever?

My first computer was a Sinclair ZX81. I whiled away hours making an asterisk jump over the character ‘A’, and programming the computers in WH Smiths to print OWAIN WAS HERE all down the screen.

How quickly we forget.

By 1991 I was chatting in real-time to computer science students in Hyderabad, India, via a multi-user text game.

Yet my father – an IT professional of two decades standing by that point – still couldn’t see why he’d ever need a non-work email address.

Today my girlfriend does all her work on Zoom and shops almost entirely online, while my friend’s kids pester him for v-bucks.

Needless to say my friend told me the latter by messaging on a smartphone.

We’re on a road to nowhere

The only direction of travel for humanity for the past 40 years has been more digital and more virtual and less ‘real’. (A term which will pretty soon become meaningless in this context.)

And technology can – as far as we’re aware – improve indefinitely, at least on human timescales.

Short of some great filter party-pooping, why would anyone expect this trend to reverse?

Extrapolating, our future will eventually mostly be ‘unreal’. All that’s up for debate is the timeframe.

In that context, Nike buying an NFT collectible studio is hardly something to roll your eyes at – unless you question its timing.

Especially as we don’t know the price.

Given the publicity generated by irate Boomers swapping links to the news story, this kind of acquisition might even pay for itself in PR terms right now.

Keep on (make) believing

Of course knowing the price often makes things worse when it comes to evaluating the NFT space.

I’ve found that people who rail against an $11.75 million CryptoPunk by ranting about the spurious ephemeral nature of NFTs don’t put up too much protest if you ask them how they’d feel if the same CryptoPunk cost $11.75 (and no millions!)

Their complaint is not really about the technology. It’s sticker shock at the pricing.

Understandable. I can’t imagine many of today’s blockbuster NFTs will keep their value long-term, either.

And by the same token (boom boom), most of the bullish mania around blockchain is driven by insanely high prices, too.

I’d guesstimate that 99% of crypto-owners wouldn’t be owners if a Bitcoin still cost $0.001, for instance.

And that includes me.

But none of this price discussion says anything about whether NFTs – unique digital assets – are an important technology with a big future.

You might not fancy sporting your own unique digital sneakers in a game like Fortnite. But your grandkids won’t think twice.

In our mostly virtual destiny, do you agree scarce and unique assets would have more value than commodities that are infinitely replicable?

You do?

(Because how could they not? Even if only by a little bit).

Then congratulations, you’re also an NFT believer. You can stop being so angry now.

Categories
Technology

VR’s killer app: Unreality

Tron (1982): Why should all realities (fail to) look like our reality?

Benedict Evans sees virtual reality going back into hibernation:

We tried VR in the 1980s, and it didn’t work. The idea may have been great, but the technology of the day was nowhere close to delivering it, and almost everyone forgot about it. Then, in 2012, we realised that this might work now. Moore’s law and the smartphone component supply chain meant that the hardware to deliver the vision was mostly there on the shelf. Since then we’ve gone from the proof of concept to maybe three quarters of the way towards a really great mass-market consumer device.

However, we haven’t worked out what you would do with a great VR device beyond games (or some very niche industrial application), and it’s not clear that we will. We’ve had five years of experimental projects and all sorts of content has been tried, and nothing other than games has really worked.

Benedict Evans, The VR Winter

Like the supporter of a perennially mid-ranked football team, I too get my hopes up about VR every dozen years or so.

In the long-term, as with AI that passes the Turing Test, ubiquitous VR seems inevitable.

Why wouldn’t we spend our time in VR if some of these were true:

  • It was prettier than reality
  • It was easy to get things done there
  • It was possible to get the impossible done there (fly, visit Mars, have sex with your favourite Hollywood crush)
  • It was safer
  • It emitted less CO2

Like AI, VR also runs aground on the shores of reality.

The journey from “Wow!” to “Wait! What?” for 2020’s incarnation of Amazon Alexa is about two minutes of interaction.

It’s even shorter with VR.

It seems clear that throwing Moore’s Law at the problem will eventually bodge together a solution.

The snag is you can’t be confident the Moon won’t have crashed into the Earth before then.

Don’t believe the hype

I think VR has a branding problem.

Reality today is all but impossible to recreate, whether it’s a sassy live-in helper named Alexa or a virtual reality New York.

Don the headset to try any good VR game, and you’re bedazzled by the transportation.

If the game is great – Half-Life: Alyx is the state-of-the-art – then there’s at least one or two mechanics that suggest we’re finally on the cusp of our William Gibson future.

But then you poke a non-player character in the face and he says nothing.

Or you can open a drawer but not a door.

Or you bump into your sofa.

Wait! What?

A moment ago you were there – somewhere.

Now you’re a child with a wind-up toy monkey clanging cymbals, frustrated it’s already run out of tricks.

Game over

As I say, Virtual Reality has a branding problem.

If the label on your tin promises ‘reality’, it’s always going to smell off when you open it.

True, half my wish list for VR involves recreating reality.

But more than half of it doesn’t.

Imagine if the first video game developers had tried to recreate a photo-realistic Wimbledon before they’d got started with Pong.

Or if Space Invaders really had to look like the world was being invaded by aliens before it shipped.

Instead, their creators used what they had to make super-stylised reductions of reality – and in time games did take over the world.

You might argue today’s VR games are the baby step equivalents of Pong or Space Invaders.

I disagree.

Today’s VR developers try to use the graphical fidelity you’d get in the best of today’s games to conjure up their virtual realities.

Doing so, they set up their own failure:

The game can’t generate its world on demand. This means every playable option has to be worked out in advance by a game designer. Which means there can’t be many options. Which isn’t like our reality.

The game can’t visualise its world on demand. This means every environment has to be created by game artists, or at best compiled from a limited set of props. Because game artists, money, and storage capacity are all finite, this means the world can’t be very large. Slash it has to be tiny. Which isn’t like our reality.

The game world doesn’t behave like our world. This means that while it might look like our world for an instant, an instant later it doesn’t. So it’s not virtual reality. It’s not even camping in the pit of the uncanny valley. To be honest it’s not even looking at the uncanny valley on a map.

A solution: Virtual unreality

What if VR designers stopped trying to wow us with the theatrics of recreating a real-world office, a lifelike shark cage, or of running away from a flesh-and-blood zombie, and instead focused on creating their own realities?

Simple shapes. Limited colours. Narrower rules. Not many things you can do.

What if it was less 2018’s Annihilation and more 1979’s Asteroids?

I’m not suggesting someone create a VR shoot-em-up like Asteroids. They already have.

I’m suggesting tackling the problem at a higher level.

Maybe your virtual unreality (VU) world has rooms. Maybe it has doors and floors. Maybe it has some physics.

Maybe it contains ten objects. Maybe just three.

And that’s all it has.

But these three to ten things all interact in your VU in a completely internally consistent way.

Your VU engine can cover any eventuality, which is important because it means it can generate VU on-demand, on-the-fly.

Say I’m walking towards my real-life sofa – the VU can put something in my way, or curve an in-VU pathway to take me away from it.

If that hack takes me into a new space that previously wasn’t on the VU engine’s map, it doesn’t matter because the alternate world is simple enough that the engine can adjust accordingly, and the dance I did doesn’t violate any internal rules.

You’re in a place. Nothing is wrong, because you can’t do much – but everything you can do you can do.
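
To make that a little more concrete, here’s a toy sketch in Python of a VU engine’s path logic. Everything in it is invented for illustration: the world knows about only a couple of things, so when the player drifts towards a real-life obstacle the engine can bend the path or drop in a wall without breaking any of its own rules.

```python
# Toy sketch of 'virtual unreality' path generation. All names and numbers
# are made up for illustration.
from dataclasses import dataclass

@dataclass
class Obstacle:
    x: float
    y: float
    radius: float   # keep-out zone around, say, your real-life sofa

def next_waypoint(player, heading, obstacles, step=1.0):
    """Propose the next point on the in-VU path, curving away from trouble."""
    px, py = player
    hx, hy = heading
    candidate = (px + hx * step, py + hy * step)
    for ob in obstacles:
        dx, dy = candidate[0] - ob.x, candidate[1] - ob.y
        if (dx * dx + dy * dy) ** 0.5 < ob.radius:
            # Too close: rotate the heading 90 degrees and tell the engine to
            # drop a wall where the obstacle sits. Walls are one of the few
            # things that exist in the VU, so nothing becomes inconsistent.
            return (px - hy * step, py + hx * step), "spawn_wall"
    return candidate, "clear"

sofa = Obstacle(x=2.0, y=0.0, radius=1.5)
print(next_waypoint(player=(0.0, 0.0), heading=(1.0, 0.0), obstacles=[sofa]))
# ((0.0, 1.0), 'spawn_wall'): the path bends away from the sofa
```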

Why not start with these simple building blocks? Work outwards from there?

Less Matrix. Less Tron, even.

More Flatland.

Manic Miner (1983): Nothing like mining. Utterly immersive.
Categories
Society Technology

A singular feeling

“I’ve had enough,” said Simon the other day in a lockdown Zoom chat. “I just want things to stop for a while.”

“God I miss the 1990s,” he added.

“It’s true,” I said. “Nothing happened in the 1990s.”

“Maybe the PlayStation.”

Like a lot of people, I’ve got the sense the world has been going slightly crazy in the past few years.

The financial crisis. A bear market. Online warfare. Trump. Brexit. Russian bots. A bear market, again. A whipsaw rally.

A virus that flies commercial. Around the world in a month, not a year. A horror story you see coming, between the photos of your aunt’s cat on your social media feed.

I realised I’ve been thinking about this all wrong.

This isn’t an overwhelming number of things happening.

It’s all the same thing happening.

It’s exactly what my friend Simon says. The world is speeding up.

It took over 500 years to go from Gutenberg’s printing press to IBM’s electric typewriter.

It took 25 years to go from the electric typewriter to the Compaq desktop PC.

15 years from there to the iMac. Ten years from iMac to iPhone.

Five years from mobile phone calls to Facebook to WhatsApp.

People aren’t shouting at each other on Twitter because they have gotten angrier.

They’re shouting on Twitter because it exists, and before it didn’t.

People don’t disagree with you because they know better.

Everyone disagrees because nobody is sure of anything.

The government lied. Wall Street lied. The news lied. Facebook lied. Now everything might be a lie.

And faster and faster it goes.

This is how we make way for the singularity.

Not with a bang. Not a whimper.

A whirligig.

Categories
Society Technology

Why you’re doomed to techno-befuddlement by the time you’re 70

A friend aspires to be as adept at using consumer technology in 30 years – when he’ll be in his late 70s – as he is today.

This will be me and my friend in 30 years’ time. Children will smirk at us before being re-submerged in their entertainment vats.

He believes many older people have been lazy about keeping up with the underlying advances of the past 50 years.

And he argues that because he works in software engineering and makes an effort to understand the principles behind new technology, he will be in a good position to achieve his goal of being able to program his semi-bio-engineered cyborg gardener using mind control as easily as his grandchildren in the year 2054.

I believe he’s missing the point, and we’ve had many debates.

Silver surfer wipeout

We first got onto this topic after my friend expressed frustration with his septuagenarian mother, who was struggling to read her online banking webpage.

She’d had the Internet for years! Why couldn’t she just fill in the boxes and click the right buttons?

Because, I argued, she didn’t grow up with it. It’s not in her bones, or her muscle memory, or the appropriate synaptic connections.

While most older people I know have basically got the hang of parsing webpages by now, it was fascinating watching them try in their early encounters with the Internet.

Very often they’d start reading from the top left. They’d scan right, then return to the left hand side, drop their eyes down a bit, and continue the process.

They were reading the webpage like a book!

Ever wondered why anyone clicked on banner adverts or got confused about content versus text ads in the sidebar?

Now you know.

Reading a webpage like a book is bonkers to my generation.

Most of us grew up with – or at least encountered – video games.

We were taught very young to treat the screen like a plate of tapas to pick and choose from, rather than as a sacred text.

Perhaps even those who missed games (many young girls, in the early days, for instance) were still trained to have a roving eye by the frenetic activity of Saturday morning cartoons, or by the visually didactic offerings of Play School and Sesame Street.

Older people grew up on books, and watched movies at the cinema that were first staged like theatre productions. Their hands were held by the film’s creators through the viewing. Though they couldn’t articulate it, they mostly knew what to expect next (what shot, what reaction shot, what panning shot, and so on).

Whereas we were taught to take what we needed from a screen. Webpages, when they came, were no big leap.

Of course we were also young, inquisitive, and took pleasure in being adaptable – qualities that do seem to wane.

In any event, reading webpages has diddly-squat to do with understanding hypertext or TCP/IP.

Similarly, many of our parents well understood what a video tape was capable of doing.

The reason they struggled to program their VCRs was that they grew up in a world of wooden horses and plastic cars and just one fancy piece of electronics in the corner of the living room that at first they weren’t allowed to touch.

Are you already a luddite?

If you’re in your mid-to-late 40s and you believe your grandkids won’t be helping you with your household appliances in 30 years, ponder the following:

  • Do you spend fewer than 10 hours a day consuming content via a handheld digital device?
  • Do you still own a CD or DVD collection, or even an iTunes library?
  • Do you take a photo of every meal you have in a restaurant and then distribute it on social media?
  • Do you ever answer your front doorbell?
  • Do you take 546 portraits of yourself in front of every cultural landmark you pass, and know which is your good side, the right angle for your chin, and what’s your best filter?
  • Are you innately au fait with the rule of three?
  • When was the last time one of your memes went viral?
  • Do you answer your phone and/or leave voice messages?
  • (You actually have a landline?!)
  • Did you meet your last three partners on dating apps?
  • Has your Facebook account been dormant since 2016?
  • How often do you Snapchat something you’re ashamed of?
  • Do you fall asleep with your iPhone?
  • Can you even imagine sitting in front of adverts on the TV?

Sometimes you should be answering yes, and sometimes no.

Hopefully the questions speak for themselves. Most of us my age are already past it.

And this is just 2010-2020 technology. If you’re under 30 and you’re thinking “sure”, wait until you see what’s coming next…

The future is child’s play

My point is that what defines technological advances, eras, generations – and alienation – is not how the technology works.

It’s what people do with it.

A clue that my friend is going to be metaphorically reaching into the befuddled darkness in his old age with the rest of us is that he thinks none of the stuff in that list is new, and that it’s mostly stupid.

He doesn’t use Snapchat, he says, because he hasn’t got time, but anyway it’s just text messaging with pictures.

Posting photos of every dinner to Instagram is pointless, narcissistic, and distracting.

And so on.

Yes, perhaps I agree with him – but I would because I’m his age.

Our parents thought Manic Miner was a waste of time, too.

My father – who worked in Information Technology all his life – said I was in too much of a hurry to encourage him to get a home email address. Who was ever going to email him at home?

Whereas young people play with the new technology around them.

It’s not even new technology when you’re young. It’s just the world.

Their play may seem silly at first. But often they’re learning how to navigate the future.

Photographing your dinner seems ridiculous to those of us who made it to 46 without a daily visual record of what we ate.

But we weren’t cultivating multi-faceted media personalities from our pre-teen years with as large a footprint online as off.

I sent my friend a video this morning. It shows kids having fun using their AirPods as covert walkie-talkies in class:

My friend replied as follows:

A typically convoluted and wasteful solution. I’m sure they have great fun doing it, though.

(To get his tone, read his second sentence in the sarcastic voice of Basil Fawlty, rather than with the camaraderie of a Blue Peter presenter.)

His response illustrates why my friend will surely have to call out the droid training man six times before it’s finally packing away the grocery deliveries the way he wants it to.

Or why he’ll be one of the last to order an autonomous car that has a hot tub instead of a driver’s seat.

Or why he’ll never meet a partner on Tinder who will only make love after micro-dosing LSD.

Or why he’ll insist on sending text messages, rather than sharing head-vibes via an embedded emote transmitter-receiver.

Or why he’ll die of a heart attack because he hadn’t tracked his blood via a wearable monitor disguised as a signet ring.

Or whatever actually does come down the road; the challenges of tomorrow’s technology won’t look like those of today.

I don’t mean to make fun of my friend. I applaud his aspiration.

But he has got a solution for a totally different problem.

Categories
Technology

Would you rather be killed by a robot?

Few of us want to die, but we have a greater aversion to going one way than another.

A classic example is air travel. Despite it being statistically far safer than driving, many more people are afraid of flying, and it is plane crashes that make the nightly news.

Of course the safety of air travel is what makes a rare calamity headline-worthy. Just another car crash caused by a sleepy, drunk, or texting driver will be lucky to make the local papers.

But there’s also something else going on.

Drivers – and perhaps even passengers – seem to accept their agency in a highway accident in a way that many airplane travellers do not. We feel helpless at 35,000 feet, but we suspend our disbelief. We’re equally helpless at 80mph on the motorway should a lorry jack-knife in front of us, but a few seconds before we felt like we were the kings of the road.

The difficulty in making an all-terrain Level 5 autonomous car that’s fit for purpose has curbed the enthusiasm of those of us who thought (/hoped!) we were on the edge of a self-driving revolution.

But the squishy calculus that we apply to fatal accidents could hold back a rollout even if sufficiently viable technology comes along.

Do Androids dream of L-plates?

What’s sufficient?

In the US, the National Highway Traffic Safety Administration estimated that 36,750 people were killed in US traffic crashes in 2018.

If the entire fleet had switched over to autonomous vehicles on January 1 2019 and the number of deaths had subsequently dropped by one to 36,749 would it be celebrated as a success?

Unlikely – although the one extra person who lived to read that statistic at the end of the year might disagree.

Even leaving aside the many complicating factors absent from this illustration (noise in the statistical data, unintended effects such as greater alcoholism as everyone could now drink and ‘drive’, an increase in drug use or suicide among newly-unemployed truck drivers) we intuitively know the US public isn’t going to accept 36,749 robot-mediated driving deaths a year.

I don’t think the American public would accept 749 annual fatalities from buggy robot driving.

Perhaps not even 49.

Obviously this makes no logical sense, but there it is.

These questions will only amplify as AI migrates further off the desktop and the cloud and visibly into our daily lives.

  • Would you be happy if a robot lifeguard saved three elderly swimmers in difficulty over your six-year-old child?
  • Would you chalk it up to statistically predictable bad luck if an AI screen for breast cancer gave you a false negative, even if you’d stood a lower chance of such an erroneous result than had a friendly-faced radiologist seen the same slide?
  • Should a robot driver head towards a fatal collision with an oncoming out-of-control vehicle, killing several, or instead swerve to crush a baby in a pram?

That last example is regularly trotted out in the insurance industry, where such issues aren’t just interesting after-dinner talking points.

Someone will have to be responsible if someone is going to pay.

But for most of us, the questions are mushier. We recoil at their asking, but the machines need to know what to do.

One option is to avoid AI, even if doing so leads to worse outcomes and perhaps tens of thousands of preventable deaths.

Another is to live in a world where we come to accept the odd random destruction or death from poor or faulty or simply inexplicable AI decisions in the same way ancient people sighed and chalked up heart attacks or plagues as evidence of the whims of capricious gods.

That sounds defeatist. But it’s arguably better than 36,750 Americans dying every year at the hands of human drivers because nobody wants to be killed by a bug.