Augmented reality and the problem of obsolescing the laptop computer

Whatever laptop computer you’re using today, it likely isn’t very different from the one you carried around 20 years ago. Sure, it’s lighter, more powerful, undoubtedly better looking and has a few new features – like working biometrics – but it’s still basically a clamshell laptop computer.

Processors have gotten smaller and batteries more capable, and we did try to replace the laptop – first with the iPad and then with the phone. Neither effort took, though Apple is signaling it wants to take another stab at turning the iPad into a laptop.

There are two things that gate our ability to dramatically change the device we carry around to do real work on: the display and the keyboard. These two things aren’t only hard to change, they define the size of the product we carry. The display is the more critical path because while we could make smaller keyboards or use foldable external keyboards, we don’t want to shrink the display. In fact, we want to make it bigger.

This is where AR comes in. If we could put the display closer to our face and attach it to our head, we could virtually make it as large as we want (only limited by the quality of the micro-display we are using)…and then we could work on rethinking the keyboard. So why aren’t there a bunch of people using Magic Leap or some other enterprise-class AR device instead of a laptop computer?

There are three things preventing this move: existing habit, the lack of a trailblazer and the lack of an AR solution that does occlusion well.

Habit

We really don’t like change. The keyboard layout we currently use was optimized for early typewriters. Then, the concern was that striking two neighboring keys at around the same time would jam the mechanism, so the layout spread common letter pairs apart. That constraint is long gone, yet we’re still using the same design, which showcases how resistant we are to changing what we’ve grown comfortable with.

It is possible to get around habits. Recall that in the 1990s we wanted phones with keyboards, and we thought screen phones were stupid. Now we all mostly use screen phones even though they’re less safe, less secure and far less robust than the business smartphones with keyboards they replaced.

We got there because of a massive marketing effort, largely by Apple, that got us to see smartphones differently. Now only a handful of us still use the more secure, more robust and safer smartphones with keyboards.

But to move the market, someone must change what the market sees as a viable device, and even Apple may not be able to do that anymore.

Lack of a trailblazer

To make a change, someone with market power must drive that change. Google attempted to start the process with Google Glass, but the execution was so bad that the AR market is still trying to get over its horrible first impression. But what is the PC you’d connect to a dedicated AR solution? Ideally it’d be a small wireless box with a foldup keyboard or some other space-optimized input device (speech?).

But try to point to something like this in the market today. We had wearable computers at one time, but that market got nearly wiped out years ago for a variety of reasons (and it was very niche). Microsoft’s HoloLens is the closest we’ve actually come to a self-contained device, but it’s expensive, it places all the weight on the head, and it really isn’t designed for a general user (it’s targeted at point solutions like exploration, manufacturing, maintenance and training). [Disclosure: Microsoft is a client of the author.]

The parts are there; they just haven’t been optimized for a general use case.

Currently, there’s really no one of scale even messing with this type of solution for the general desktop.

Occlusion

One of the reasons for this is that current AR technology doesn’t occlude well. That means when you create a virtual object, it’s somewhat transparent and looks a bit like a ghost to your eye. That’s fine if you are trying to help someone do a repair on a physical object or help them assemble a product, but people don’t seem to like transparent monitors. They want the image to be solid.

This may be something you could train people to live with but, for now, the AR solutions just don’t do virtual monitors or TVs very well. If you could better occlude the background and make virtual monitors look more like actual monitors, that would change. This problem is being worked on, which means we could eventually get to a solution. But we aren’t even close to being there yet.

I think AR will eventually allow us to rethink the laptop computer and create something that’s smaller, lighter, has longer battery life and offers a larger virtual screen than even this 49-in. monster I have on my desk. Bigger is always better, and currently many of us not only have a heavy laptop, we’re also carrying one or two portable screens along with it just to get the screen real estate we want and need.

AR glasses could change this dramatically, but to do so some company will have to invest in driving the change, they’ll have to trailblaze the solution (which is really risky) and they’ll have to create AR glasses that actually occlude the background rather than just overlaying translucent images on it.

I know the technology is coming, I just don’t yet see anyone stepping up to drive the change. And until that critical step is taken, the market won’t move. Given China is attempting to take over the technology lead in the world, were I a betting man, I’d bet that this change will come from that country first.

We’ll see. Until then, like you, I’ll be waiting for a mobile PC that is truly mobile.
