Microsoft HoloLens is the world’s first self-contained holographic computer. Remarkably, in Research Mode, available in the newest release of Windows 10 for HoloLens, it’s also a potent computer vision research device. Application code can not only access video and audio streams but can also leverage the results of built-in computer vision algorithms: SLAM (simultaneous localization and mapping) provides the motion of the device, and the spatial-mapping algorithms provide 3D meshes of the environment. These capabilities are made possible by several built-in image sensors that complement the color video camera normally accessible to applications.
Specifically, HoloLens has four gray-scale environment-tracking cameras and a depth camera to sense its environment and capture user gestures. As shown in Figure 1, two of the gray-scale cameras are configured as a stereo rig capturing the area in front of the device, so the absolute depth of tracked visual features can be determined through triangulation. The two additional gray-scale cameras provide a wider field of view for keeping track of features. These synchronized global-shutter cameras are significantly more light-sensitive than the color camera and can capture images at a rate of up to 30 frames per second (FPS).
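For a calibrated, rectified stereo rig, triangulation reduces to a simple relation. As a quick sketch in standard stereo-vision notation (the symbols below are conventional, not taken from the HoloLens documentation): with focal length f (in pixels), baseline B between the two cameras, and disparity d between a feature’s image positions in the left and right views, the depth of the feature is

\[
Z = \frac{f \cdot B}{d}
\]

Errors in measured disparity translate into depth errors that grow with distance, which is why a wider baseline improves depth precision at longer range.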
The depth camera uses active infrared (IR) illumination to determine depth through time-of-flight. The camera operates in two modes: the first provides high-frequency (30 FPS) near-depth sensing, commonly used for hand tracking, while the second provides lower-frequency (1–5 FPS) far-depth sensing, currently used by spatial mapping. In addition to depth, the camera also delivers actively illuminated IR images that are valuable in their own right: because the illumination comes from the HoloLens itself, these images are largely unaffected by ambient light.
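As a brief aside on the underlying principle (generic time-of-flight geometry, not a specification of the HoloLens sensor): the camera emits IR light and measures the round-trip travel time τ of the light reflected back from the scene; with c the speed of light, the distance to the surface follows as

\[
d = \frac{c \, \tau}{2}
\]

In practice, continuous-wave ToF cameras typically recover this delay from the phase shift of modulated illumination rather than by timing individual pulses.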
With the newest release of Windows 10 for HoloLens, researchers now have the option to enable Research Mode on their HoloLens devices to gain access to all of these raw image sensor streams, shown in Figure 2. Researchers can still use the results of the built-in computer vision algorithms but can now also choose to run their own algorithms on the raw sensor data. The sensor streams can either be processed on device or transferred wirelessly to another PC or to the cloud for more computationally demanding tasks. This opens up a wide range of new computer vision applications for HoloLens. In egocentric vision, HoloLens can be used to analyze the world from the perspective of the user wearing the device; for these applications, the ability of HoloLens to visualize algorithm results in the 3D world in front of the user can be a key advantage. The sensing capabilities of HoloLens can also be valuable in robotics, where they can, for example, enable a robot to navigate its environment.
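To give a feel for how these streams surface to application code, here is a minimal C++/WinRT sketch that enumerates the media frame source groups available on a device. It assumes the Research Mode sensors are exposed through the standard Windows.Media.Capture.Frames API alongside the color camera; consult the Research Mode documentation for the authoritative details.

    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.Foundation.Collections.h>
    #include <winrt/Windows.Media.Capture.Frames.h>
    #include <cstdio>

    using namespace winrt;
    using namespace Windows::Media::Capture::Frames;

    int main()
    {
        init_apartment();

        // Enumerate every media frame source group on the device. With
        // Research Mode enabled, the gray-scale tracking cameras and the
        // depth/IR streams are expected to appear here alongside the
        // regular color camera (an assumption of this sketch).
        auto groups = MediaFrameSourceGroup::FindAllAsync().get();
        for (auto const& group : groups)
        {
            wprintf(L"Source group: %s\n", group.DisplayName().c_str());
            for (auto const& info : group.SourceInfos())
            {
                // SourceKind distinguishes Color, Infrared, and Depth
                // sources; a MediaCapture plus MediaFrameReader can then
                // subscribe to the stream of interest.
                wprintf(L"  kind=%d  id=%s\n",
                        static_cast<int>(info.SourceKind()),
                        info.Id().c_str());
            }
        }
    }

From there, a MediaFrameReader’s FrameArrived events deliver the individual frames, which can be processed on device or forwarded over the network.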
These new HoloLens capabilities will be demonstrated in a tutorial on June 19th, 2018, at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City. The next-generation HoloLens depth-sensing capabilities, which will also be made available through Project Kinect for Azure, will be demonstrated at this tutorial as well.
We hope to see you there!
Learn more:
• HoloLens Research Mode documentation
• HoloLens Research Mode session at CVPR 2018
• Alex Kipman’s Project Kinect for Azure blog on LinkedIn
• Register your interest in Project Kinect for Azure