Microsoft’s HoloLens is no joke. Many people have now tried the company’s latest revision of its unreleased augmented reality headset and even built an app for it. The new hardware, which Microsoft also showcased during its Build developer conference keynote, feels very solid and the user experience (mostly) delivers on the company’s promises.
After that keynote, Microsoft gave developers and some of us media pundits a chance to spend some quality time with HoloLens by building our own “holographic application” using the Unity engine and Visual Studio.
HoloLens is all about augmented reality. It’s about placing objects into the real world, which you can still see while you’re wearing the headset. It’s not a virtual reality headset like the Oculus Rift, so it’s not about total immersion. Instead, it lets you see objects on a table in front of you that aren’t there in the real world, for example, and it lets you interact with them as if they were real objects.
When you first see somebody using HoloLens, you’ll probably think something is wrong with them. They’ll walk around things you can’t see, make seemingly random click gestures in the air (Microsoft calls this “air-tapping”) and probably ooh and ahh a few times.
Until you’ve used HoloLens — and very few people have — this all sounds very abstract. But once you strap it on and use it, it’s indeed a bit of a revelation.
The early (and highly positive) reviews from Microsoft’s January event, where a few select members of the press got to try it, weren’t an exaggeration.
Programming a hologram sounds like something that should be done with some kind of special cybergloves on a computer the size of a ‘60s IBM mainframe. But at Build 2015, Microsoft has been quietly taking developers through the “Holographic Academy,” a 90-minute training session that teaches them the basics of building projects for its HoloLens augmented reality headset. And it turns out that basic hologram creation is, if not exactly straightforward, at least pretty understandable.
Technical details aside, the biggest difference between HoloLens and any other platform is the amount of information it collects. Microsoft is secretive about what exactly is in the device, but among other things, it can accept voice commands, read very simple finger gestures, and scan rooms well enough to build a detailed, fairly accurate depth map.
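For the curious, here’s roughly how that room scan surfaces on the developer side. This is a hedged sketch, not official sample code: it leans on the SpatialMappingCollider and SpatialMappingRenderer components Unity later shipped in its UnityEngine.VR.WSA namespace, whereas at the event itself the equivalent pieces came bundled as HoloToolkit prefabs.

```csharp
// Minimal sketch: expose the HoloLens room scan to the rest of a Unity scene.
// Attach this to an empty GameObject; the scanned surfaces become a mesh that
// other objects can render against and collide with.
using UnityEngine;
using UnityEngine.VR.WSA;   // UnityEngine.XR.WSA in newer Unity versions

public class RoomScan : MonoBehaviour
{
    void Start()
    {
        // Collisions against real-world surfaces (tables, floors, walls).
        gameObject.AddComponent<SpatialMappingCollider>();

        // Optional: draw the scanned mesh so you can see what the sensors picked up.
        gameObject.AddComponent<SpatialMappingRenderer>();
    }
}
```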
All of that sensor data is fed into the Unity game engine, whose makers announced augmented reality support earlier this week. Microsoft is pushing the idea of truly universal apps — which include HoloLens editions — at Build, but at the training session we created a HoloLens-specific Unity project, exported it through Microsoft’s Visual Studio, and loaded it directly onto the device via Micro USB.
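If you’re curious what that export step looks like outside the Build Settings dialog, here’s a rough editor-script sketch. It assumes the Unity 5.x-era Windows Store (WSA) build settings; the scene path and output folder are placeholders, and at the Academy we simply clicked through the dialogs rather than scripting any of this.

```csharp
// Editor-only sketch (place under an "Editor" folder): export a HoloLens-ready
// Visual Studio solution from a Unity project. The scene path and output folder
// below are hypothetical, not part of any official workflow.
using UnityEditor;

public static class HoloLensBuild
{
    [MenuItem("Build/Export HoloLens Solution")]
    public static void Export()
    {
        // Target the Windows Store (WSA) player with the Universal Windows Platform SDK.
        EditorUserBuildSettings.SwitchActiveBuildTarget(BuildTarget.WSAPlayer);
        EditorUserBuildSettings.wsaSDK = WSASDK.UWP;
        EditorUserBuildSettings.wsaUWPBuildType = WSAUWPBuildType.D3D;

        // "Virtual Reality Supported" is what tells Unity to render for the headset.
        PlayerSettings.virtualRealitySupported = true;

        // This writes out a Visual Studio solution; you then build and deploy it
        // to the device (over USB or Wi-Fi) from Visual Studio itself.
        BuildPipeline.BuildPlayer(
            new[] { "Assets/Scenes/Origami.unity" },  // hypothetical scene
            "App",                                    // output folder for the solution
            BuildTarget.WSAPlayer,
            BuildOptions.None);
    }
}
```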
The game wasn’t the most impressive HoloLens demo around, compared to more complex experiences like Holo Studio or the Mars Rover. It was a little floating island of notepaper and origami, with two paper balls floating above it. The individual pieces, like art, music, and specific scripts, had been created already — some specifically for this project, some as part of a more general “HoloToolkit.” Our job was to bring a partially-assembled Unity project to completion, creating a series of increasingly complex behaviors.
There wasn’t any real programming in the Holographic Academy, but it was easy to at least look through the code, and putting the game together felt like building any beginner’s tutorial in Unity. Instead of placing a virtual camera, you place a marker that represents the user’s head. Instead of assigning behaviors when someone moves or clicks the mouse, you assign them when someone looks at an object and makes an “air tap.” And so on.
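To make that concrete, here’s a hedged sketch of the gaze-and-air-tap pattern, written against the GestureRecognizer API Unity later exposed for HoloLens (UnityEngine.VR.WSA.Input). The manager name and the “OnSelect” message are conventions borrowed from the tutorial material, not fixed parts of the SDK.

```csharp
// Sketch: the main camera stands in for the user's head, a raycast along its
// forward vector stands in for gaze, and an air tap is routed to whatever the
// user is currently looking at.
using UnityEngine;
using UnityEngine.VR.WSA.Input;   // UnityEngine.XR.WSA.Input in newer Unity versions

public class GazeGestureManager : MonoBehaviour
{
    GestureRecognizer recognizer;
    GameObject focusedObject;   // whatever the gaze ray currently hits

    void Start()
    {
        recognizer = new GestureRecognizer();
        // An air tap behaves like a click on the focused object.
        recognizer.TappedEvent += (source, tapCount, headRay) =>
        {
            if (focusedObject != null)
            {
                focusedObject.SendMessage("OnSelect", SendMessageOptions.DontRequireReceiver);
            }
        };
        recognizer.StartCapturingGestures();
    }

    void Update()
    {
        // "Gaze" is just a raycast from the head position along the view direction.
        RaycastHit hit;
        focusedObject = Physics.Raycast(Camera.main.transform.position,
                                        Camera.main.transform.forward, out hit)
            ? hit.collider.gameObject
            : null;
    }

    void OnDestroy()
    {
        recognizer.StopCapturingGestures();
    }
}
```

Any object with a collider and an OnSelect method then picks up tap behavior with no further wiring.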
Voice and motion control aren’t new to anyone who’s played a Kinect game, and gaze-tracking — the headset’s ability to see where its wearer is looking — is a central component of VR. But being able to “see” the real world changes everything. The HoloLens origami collection was just a group of disembodied objects floating in space, but by the time we were done, you could tap to place the notebook on a table, say “move, ball” (or another phrase of your choice) to drop the two paper balls, then watch them roll off the table, onto the floor, and around each other. As long as you’ve created objects that will respond to basic physics, the HoloLens sensors can do the rest.
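For what it’s worth, the “move, ball” step boils down to something like the sketch below, assuming Unity’s KeywordRecognizer (UnityEngine.Windows.Speech): a spoken phrase maps to an action, and adding a Rigidbody hands the paper balls over to the physics engine, which then lets them collide with the scanned mesh of the real table and floor. The phrase and field names are placeholders.

```csharp
// Sketch: register a voice keyword, and when it's heard, give the origami spheres
// Rigidbodies so gravity and collisions (including against the room's spatial-mapping
// mesh) take over.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class SpeechManager : MonoBehaviour
{
    public GameObject[] paperBalls;   // assign the two spheres in the Inspector
    KeywordRecognizer recognizer;
    Dictionary<string, System.Action> keywords;

    void Start()
    {
        keywords = new Dictionary<string, System.Action>
        {
            { "move ball", DropBalls }   // or whatever phrase you choose to register
        };

        recognizer = new KeywordRecognizer(new List<string>(keywords.Keys).ToArray());
        recognizer.OnPhraseRecognized += args =>
        {
            System.Action action;
            if (keywords.TryGetValue(args.text, out action)) action();
        };
        recognizer.Start();
    }

    void DropBalls()
    {
        foreach (var ball in paperBalls)
        {
            // A Rigidbody is all it takes; the engine handles the rest.
            if (ball.GetComponent<Rigidbody>() == null)
            {
                ball.AddComponent<Rigidbody>();
            }
        }
    }
}
```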
Being able to program something on HoloLens doesn’t mean you know anything about its inner workings. Microsoft won’t say much about the technology that turns those simple rendered objects into startlingly realistic projections, and the software development kit just acts as an intermediary. It also doesn’t mean you can design a good HoloLens game.
There are probably going to be a lot of holographic Rubik’s Cubes and puzzle boxes in the early days of the platform, except that you’ll only be able to move them by staring and tapping. But just understanding the basics of building for such an unfamiliar platform makes it seem more real — and, despite all its problems, more viable.