
VR’s killer app: Unreality

Tron (1982): Why should all realities (fail to) look like our reality?

Benedict Evans sees virtual reality going back into hibernation:

We tried VR in the 1980s, and it didn’t work. The idea may have been great, but the technology of the day was nowhere close to delivering it, and almost everyone forgot about it. Then, in 2012, we realised that this might work now. Moore’s law and the smartphone component supply chain meant that the hardware to deliver the vision was mostly there on the shelf. Since then we’ve gone from the proof of concept to maybe three quarters of the way towards a really great mass-market consumer device.

However, we haven’t worked out what you would do with a great VR device beyond games (or some very niche industrial application), and it’s not clear that we will. We’ve had five years of experimental projects and all sorts of content has been tried, and nothing other than games has really worked.

Benedict Evans, The VR Winter

Like the supporter of a perennially mid-ranked football team, I too get my hopes up about VR every dozen years or so.

In the long term, as with AI that passes the Turing Test, ubiquitous VR seems inevitable.

Why wouldn’t we spend our time in VR if some of these were true:

  • It was prettier than reality
  • It was easy to get things done there
  • It was possible to get the impossible done there (fly, visit Mars, have sex with your favourite Hollywood crush)
  • It was safer
  • It emitted less CO2

Like AI, VR also runs aground on the shores of reality.

The journey from “Wow!” to “Wait! What?” for 2020’s incarnation of Amazon Alexa is about two minutes of interaction.

It’s even shorter with VR.

It seems clear that throwing Moore’s Law at the problem will eventually bodge together a solution.

The snag is you can’t be confident the Moon won’t have crashed into the Earth before then.

Don’t believe the hype

I think VR has a branding problem.

Reality is today all but impossible to recreate, whether it’s a sassy live-in helper named Alexa or a virtual-reality New York.

Don the headset to try any good VR game, and you’re bedazzled by the transportation.

If the game is great – Half-Life: Alyx is the state-of-the-art – then there are at least one or two mechanics that suggest we’re finally on the cusp of our William Gibson future.

But then you poke a non-player character in the face and he says nothing.

Or you can open a drawer but not a door.

Or you bump into your sofa.

Wait! What?

A moment ago you were there – somewhere.

Now you’re a child with a wind-up toy monkey clanging cymbals, frustrated it’s already run out of tricks.

Game over

As I said: VR has a branding problem.

If the label on your tin promises ‘reality’, it’s always going to smell off when you open it.

True, half my wish list for VR involves recreating reality.

But more than half of it doesn’t.

Imagine if the first video game developers had tried to recreate a photo-realistic Wimbledon before they’d got started with Pong.

Or if Space Invaders really had to look like the world was being invaded by aliens before it shipped.

Instead, their creators used what they had to make super-stylised reductions of reality – and in time games did take over the world.

You might argue today’s VR games are the baby step equivalents of Pong or Space Invaders.

I disagree.

Today’s VR developers try to use the graphical fidelity you’d get in the best of today’s games to conjure up their virtual realities.

Doing so, they set up their own failure:

The game can’t generate its world on demand. This means every playable option has to be worked out in advance by a game designer. Which means there can’t be many options. Which isn’t like our reality.

The game can’t visualise its world on demand. This means every environment has to be created by game artists, or at best compiled from a limited set of props. Because game artists, money, and storage capacity are all finite, this means the world can’t be very large. Slash it has to be tiny. Which isn’t like our reality.

The game world doesn’t behave like our world. This means that while it might look like our world for an instant, an instant later it doesn’t. So it’s not virtual reality. It’s not even camping in the pit of the uncanny valley. To be honest it’s not even looking at the uncanny valley on a map.
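
To make the first of those failure modes concrete, here’s a toy sketch – mine, not anything from a real engine, and every name in it is made up. A hand-authored interaction table can only ever answer the questions its designers thought to ask; everything else falls through to a dead default, which is exactly the poke-the-NPC moment.

```python
# Illustrative only: why hand-authored worlds run dry.
# Every (object, action) pair a player might try must be scripted in
# advance; anything the designers didn't anticipate falls through to
# the immersion-breaking "Wait! What?" moment.

AUTHORED_RESPONSES = {
    ("drawer", "open"): "The drawer slides open.",
    ("headcrab", "shoot"): "The headcrab drops dead.",
}

def interact(obj: str, action: str) -> str:
    response = AUTHORED_RESPONSES.get((obj, action))
    if response is None:
        # The default case: the world simply fails to react.
        return f"You {action} the {obj}. Nothing happens."
    return response

print(interact("drawer", "open"))   # scripted: works
print(interact("door", "open"))     # unscripted: the spell breaks
print(interact("npc", "poke"))      # poke the NPC in the face: silence
```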

A solution: Virtual unreality

What if VR designers stopped trying to wow us with the theatrics of recreating a real-world office, a lifelike shark cage, or of running away from a flesh-and-blood zombie, and instead focused on creating their own realities?

Simple shapes. Limited colours. Narrower rules. Not many things you can do.

What if it was less 2018’s Annihilation and more 1979’s Asteroids?

I’m not suggesting someone create a VR shoot-’em-up like Asteroids. They already have.

I’m suggesting tackling the problem at a higher level.

Maybe your virtual unreality (VU) world has rooms. Maybe it has doors and floors. Maybe it has some physics.

Maybe it contains ten objects. Maybe just three.

And that’s all it has.

But these three to ten things all interact in your VU in a completely internally consistent way.

Your VU engine can cover any eventuality, which is important because it means it can generate VU on-demand, on-the-fly.

Say I’m walking towards my real-life sofa – the VU can put something in my way, or curve an in-VU pathway to take me away from it.

If that hack takes me into a new space that previously wasn’t on the VU engine’s map, it doesn’t matter because the alternate world is simple enough that the engine can adjust accordingly, and the dance I did doesn’t violate any internal rules.

You’re in a place. Nothing is wrong, because you can’t do much – but everything you can do, you can do.
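
For what it’s worth, here’s a minimal sketch of how such a VU engine might work. It’s a toy under my own assumptions – all the names and numbers are hypothetical – but it shows the point: a tiny, consistent rule set lets the world be generated lazily, cell by cell, so the engine can always extend the map, or drop a wall in front of your real-life sofa, without ever breaking its own rules.

```python
# A hypothetical VU engine sketch (illustrative, not a real renderer).
# The world is generated on demand from one simple rule, so there is no
# pre-authored map to run off the edge of.

import random

class VUEngine:
    OBJECTS = ["cube", "pyramid", "sphere"]  # three things, nothing more

    def __init__(self, seed: int = 42):
        self.rng = random.Random(seed)
        self.cells = {}          # (x, y) -> contents, generated lazily
        self.blocked = set()     # cells overlapping real-world obstacles

    def cell(self, x: int, y: int) -> str:
        """Generate (once) and return the contents of a cell on demand."""
        if (x, y) in self.blocked:
            return "wall"        # the hack that steers you away from the sofa
        if (x, y) not in self.cells:
            # One internally consistent rule: mostly floor, occasionally
            # one of the three simple objects.
            roll = self.rng.random()
            self.cells[(x, y)] = (
                self.rng.choice(self.OBJECTS) if roll < 0.1 else "floor"
            )
        return self.cells[(x, y)]

    def mark_real_obstacle(self, x: int, y: int) -> None:
        """Tell the engine a real-world object occupies this VU cell."""
        self.blocked.add((x, y))

vu = VUEngine()
vu.mark_real_obstacle(2, 0)                  # the sofa, in VU coordinates
path = [(0, 0), (1, 0), (2, 0), (2, 1)]      # player walks towards the sofa...
for x, y in path:
    print((x, y), vu.cell(x, y))             # ...and meets a wall, not bruised shins
```

Nothing here violates the VU’s internal rules when the wall appears: walls are simply part of what the world can contain.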

Why not start with these simple building blocks? Work outwards from there?

Less Matrix. Less Tron, even.

More Flatland.

Manic Miner (1983): Nothing like mining. Utterly immersive.

A singular feeling

“I’ve had enough,” said Simon the other day in a lockdown Zoom chat. “I just want things to stop for a while.”

“God I miss the 1990s,” he added.

“It’s true,” I said. “Nothing happened in the 1990s.”

“Maybe the PlayStation.”

Like a lot of people, I’ve got the sense the world has been going slightly crazy in the past few years.

The financial crisis. A bear market. Online warfare. Trump. Brexit. Russian bots. A bear market, again. A whipsaw rally.

A virus that flies commercial. Around the world in a month, not a year. A horror story you see coming, between the photos of your aunt’s cat on your social media feed.

I realised I’ve been thinking about this all wrong.

This isn’t an overwhelming number of things happening.

It’s all the same thing happening.

It’s exactly what my friend Simon says. The world is speeding up.

It took over 500 years to go from Gutenberg’s printing press to IBM’s electric typewriter.

It took 25 years to go from the electric typewriter to the Compaq desktop PC.

15 years from there to the iMac. Ten years from iMac to iPhone.

Five years from mobile phone calls to Facebook to WhatsApp.

People aren’t shouting at each other on Twitter because they have gotten angrier.

They’re shouting on Twitter because it exists, and before it didn’t.

People don’t disagree with you because they know better.

Everyone disagrees because nobody is sure of anything.

The government lied. Wall Street lied. The news lied. Facebook lied. Now everything might be a lie.

And faster and faster it goes.

This is how we make way for the singularity.

Not with a bang. Not a whimper.

A whirligig.


Why you’re doomed to techno-befuddlement by the time you’re 70

A friend aspires to be as adept at using consumer technology in 30 years – when he’ll be in his late 70s – as he is today.

This will be me and my friend in 30 years’ time. Children will smirk at us before being re-submerged in their entertainment vats.

He believes many older people have been lazy about keeping up with the underlying advances of the past 50 years.

And he argues that because he works in software engineering and makes an effort to understand the principles behind new technology, he will be in a good position to achieve his goal of being able to program his semi-bio-engineered cyborg gardener using mind control as easily as his grandchildren in the year 2054.

I believe he’s missing the point, and we’ve had many debates.

Silver surfer wipeout

We first got onto this topic after my friend expressed frustration with his septuagenarian mother, who was struggling to read her online banking webpage.

She’d had the Internet for years! Why couldn’t she just fill in the boxes and click the right buttons?

Because, I argued, she didn’t grow up with it. It’s not in her bones, or her muscle memory, or the appropriate synaptic connections.

While most older people I know have basically got the hang of parsing webpages by now, it was fascinating watching them try in their early encounters with the Internet.

Very often they’d start reading from the top left. They’d scan right, then return to the left-hand side, drop their eyes down a bit, and continue the process.

They were reading the webpage like a book!

Ever wondered why anyone clicked on banner adverts or got confused about content versus text ads in the sidebar?

Now you know.

Reading a webpage like a book is bonkers to my generation.

Most of us grew up with – or at least encountered – video games.

We were taught very young to treat the screen like a plate of tapas to pick and choose from, rather than as a sacred text.

Perhaps even those who missed games (many young girls, in the early days, for instance) were still trained to have a roving eye by the frenetic activity of Saturday morning cartoons, or by the visually didactic offerings of Play School and Sesame Street.

Older people grew up on books, and watched movies at the cinema that were first staged like theatre productions. Their hands were held by the film’s creators through the viewing. Though they couldn’t articulate it, they mostly knew what to expect next (what shot, what reaction shot, what panning shot, and so on).

Whereas we were taught to take what we needed from a screen. Webpages, when they came, were no big leap.

Of course we were also young, inquisitive, and took pleasure in being adaptable – qualities that do seem to wane.

In any event, reading webpages has diddly-squat to do with understanding hypertext or TCP/IP.

Similarly, many of our parents well understood what a video tape was capable of doing.

The reason they struggled to program their VCRs was that they grew up in a world of wooden horses and plastic cars and just one fancy piece of electronics in the corner of the living room that at first they weren’t allowed to touch.

Are you already a luddite?

If you’re in your mid-to-late 40s and you believe your grandkids won’t be helping you with your household appliances in 30 years, ponder the following:

  • Do you spend fewer than 10 hours a day consuming content via a handheld digital device?
  • Do you still own a CD or DVD collection, or even an iTunes library?
  • Do you take a photo of every meal you have in a restaurant and then distribute it on social media?
  • Do you ever answer your front doorbell?
  • Do you take 546 portraits of yourself in front of every cultural landmark you pass, and know which is your good side, the right angle for your chin, and what’s your best filter?
  • Are you innately au fait with the rule of thirds?
  • When was the last time one of your memes went viral?
  • Do you answer your phone and/or leave voice messages?
  • (You actually have a landline?!)
  • Did you meet your last three partners on dating apps?
  • Has your Facebook account been dormant since 2016?
  • How often do you Snapchat something you’re ashamed of?
  • Do you fall asleep with your iPhone?
  • Can you even imagine sitting in front of adverts on the TV?

Sometimes you should be answering yes, and sometimes no.

Hopefully the questions speak for themselves. Most of us my age are already past it.

And this is just 2010-2020 technology. If you’re under 30 and you’re thinking “sure”, wait until you see what’s coming next…

The future is child’s play

My point is that what defines technological advances, eras, generations – and alienation – is not how the technology works.

It’s what people do with it.

A clue that my friend is going to be metaphorically reaching into the befuddled darkness in his old age with the rest of us is that he thinks none of the stuff in that list is new, and that it’s mostly stupid.

He doesn’t use Snapchat, he says, because he hasn’t got time, but anyway it’s just text messaging with pictures.

Posting photos of every dinner to Instagram is pointless, narcissistic, and distracting.

And so on.

Yes, perhaps I agree with him – but I would because I’m his age.

Our parents thought Manic Miner was a waste of time, too.

My father – who worked in Information Technology all his life – said I was in too much of a hurry to encourage him to get a home email address. Who was ever going to email him at home?

Whereas young people play with the new technology around them.

It’s not even new technology when you’re young. It’s just the world.

Their play may seem silly at first. But often they’re learning how to navigate the future.

Photographing your dinner seems ridiculous to those of us who made it to 46 without a daily visual record of what we ate.

But we weren’t cultivating multi-faceted media personalities from our pre-teen years with as large a footprint online as off.

I sent my friend a video this morning. It shows kids having fun using their AirPods as covert walkie-talkies in class.

My friend replied as follows:

A typically convoluted and wasteful solution. I’m sure they have great fun doing it, though.

(To get his tone, read his second sentence in the sarcastic voice of Basil Fawlty, rather than with the camaraderie of a Blue Peter presenter.)

His response illustrates why my friend will surely have to call out the droid training man six times before it’s finally packing away the grocery deliveries the way he wants it to.

Or why he’ll be one of the last to order an autonomous car that has a hot tub instead of a driver’s seat.

Or why he’ll never meet a partner on Tinder who will only make love after micro-dosing LSD.

Or why he’ll insist on sending text messages, rather than sharing head-vibes via an embedded emote transmitter-receiver.

Or why he’ll die of a heart attack because he hadn’t tracked his blood via a wearable monitor disguised as a signet ring.

Or whatever actually does come down the road; the challenges of tomorrow’s technology won’t look like those of today.

I don’t mean to make fun of my friend. I applaud his aspiration.

But he has got a solution for a totally different problem.


Would you rather be killed by a robot?

Few of us want to die, but we have a greater aversion to going one way than another.

A classic example is air travel. Despite flying being statistically far safer than driving, many more people are afraid of it, and it is airplane crashes that make the nightly news.

Of course the safety of air travel is what makes a rare calamity headline-worthy. Just another car crash caused by a sleepy, drunk, or texting driver will be lucky to make the local papers.

But there’s also something else going on.

Drivers – and perhaps even passengers – seem to accept their agency in a highway accident in a way that many airplane travellers do not. We feel helpless at 35,000 feet, but we suspend our disbelief. We’re equally helpless at 80mph on the motorway should a lorry jack-knife in front of us, but a few seconds before we felt like we were the kings of the road.

The difficulty in making an all-terrain level 5 autonomous car that’s fit for purpose has curbed the enthusiasm of those of us who thought (/hoped!) we were on the edge of a self-driving revolution.

But the squishy calculus that we apply to fatal accidents could hold back a rollout even if sufficiently viable technology comes along.

Do Androids dream of L-plates?

What’s sufficient?

In the US, the National Highway Traffic Safety Administration estimated that 36,750 people were killed in traffic crashes in 2018.

If the entire fleet had switched over to autonomous vehicles on 1 January 2019, and the number of deaths had subsequently dropped by one to 36,749, would it be celebrated as a success?

Unlikely – although the one extra person who lived to read that statistic at the end of the year might disagree.

Even leaving aside the many complicating factors absent from this illustration (noise in the statistical data, unintended effects such as greater alcoholism as everyone could now drink and ‘drive’, an increase in drug use or suicide among newly unemployed truck drivers) we intuitively know the US public isn’t going to accept 36,749 robot-mediated driving deaths a year.

I don’t think the American public would accept 749 annual fatalities from buggy robot driving.

Perhaps not even 49.

Obviously this makes no logical sense, but there it is.

These questions will only grow louder as AI migrates off the desktop and out of the cloud, and visibly into our daily lives.

  • Would you be happy if a robot lifeguard saved three elderly swimmers in difficulty ahead of your six-year-old child?
  • Would you chalk it up to statistically predictable bad luck if an AI screen for breast cancer gave you a false negative, even if you’d stood a lower chance of such an erroneous result than had a friendly-faced radiologist seen the same slide?
  • Should a robot driver head towards a fatal collision with an oncoming out-of-control vehicle, killing several, or instead swerve to crush a baby in a pram?

That last example is regularly trotted out in the insurance industry, where such issues aren’t just interesting after-dinner talking points.

Someone will have to be responsible if someone is going to pay.

But for most of us, the questions are mushier. We recoil at their asking, but the machines need to know what to do.

One option is to avoid AI, even if doing so leads to worse outcomes and perhaps tens of thousands of preventable deaths.

Another is to live in a world where we come to accept the odd random destruction or death from poor or faulty or simply inexplicable AI decisions in the same way ancient people sighed and chalked up heart attacks or plagues as evidence of the whims of capricious gods.

That sounds defeatist. But it’s arguably better than 36,750 Americans dying every year at the hands of human drivers because nobody wants to be killed by a bug.


Instagram: On the node

A candid photo exposes the reality behind so many aspirational Instagram photographs set in impossibly beautiful locations:

Norway’s Trolltunga: The Instagram myth
Trolltunga: The reality

CNBC notes:

A decade ago, fewer than 800 people a year traveled to Trolltunga. Next year, that figure’s expected to hit 100,000.

People queue with dozens of others, babbling and checking their phones, to be photographed standing at Trolltunga in seemingly solitary meditation.

What’s going on?

The 1990s interpretation: A generation that puts more of a premium on experiences than on stuff is using cheap air travel to seek out the world’s greatest places.

The 2000s interpretation: The explosion of information on the Internet and the ubiquity of smartphones have made people more aware of where they can visit and what sort of experiences they should pursue.

The 2019 reality: Instagram has made every place a de facto node on a real-world physical network. Social media influencers and network effects drive superlinear traffic to the most popular nodes, which only increases their subsequent popularity.

Instagram will eat the world

Twenty years ago, the National Geographic could publish a photo of Trolltunga to the fleeting interest of a magazine browser. One or two might add Norway to their holiday lists.

Today’s aspirational Instagram user identifies Trolltunga as a resource. In consuming that resource – by visiting, photographing, and posting – they make a honey trap that attracts 100 more.

Hence the most popular spots are noded and overrun, and this kind of mathematics implies they’ll be impossibly crowded within an iteration or two.
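
That superlinear claim is easy to illustrate with a toy rich-get-richer simulation – my sketch, not anything from CNBC or Instagram. When each new visitor picks a destination in proportion to its existing popularity raised to a power greater than one, a single node quickly soaks up the lion’s share of all traffic.

```python
# A toy preferential-attachment simulation (illustrative only; the
# parameters are assumptions, not measured data). Each tourist picks a
# destination with probability proportional to visits ** ALPHA. With
# ALPHA > 1 the attachment is superlinear: early leads compound and one
# node ends up wildly over-visited.

import random

random.seed(1)
ALPHA = 1.2                      # > 1 means popularity compounds superlinearly
visits = [1] * 10                # ten destinations, all starting equal

for _ in range(100_000):         # each tourist copies what's already popular
    weights = [v ** ALPHA for v in visits]
    pick = random.choices(range(len(visits)), weights=weights)[0]
    visits[pick] += 1

for node, count in sorted(enumerate(visits), key=lambda kv: -kv[1]):
    print(f"node {node}: {count:>6} visits")
```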

Solutions?

  • Restricted access to the most popular nodes (quotas, dollars)
  • A counter-cultural trend towards more obscure nodes (at best a delaying tactic)
  • Simulacra nodes. A fake Grand Canyon. A 3D-printed Taj Mahal. Machu Picchu remade for middle-class China to visit by train.
  • The Instagram craze dies down (unlikely)
  • Eventually we all live in the matrix, anyway

Many of these solutions sound phoney.

Are they phonier than the myth of Trolltunga today?