Virtual reality technology has become popular in recent years thanks to the availability of more affordable systems. Today, we can put on a head-mounted display and find ourselves in a completely different world, a virtual one in which the more we explore, the more new environments we can see and hear. However, how real a VR experience feels depends heavily on our sense of self—and others—in the virtual world we have entered, an awareness realized in the form of avatars. How connected do we feel to our avatars? How realistic are the avatars around us—do they fit the style of the environment, behave how we expect, smile back at us? These representations enhance the believability of our surroundings, which makes exploring questions like these important to the continued advancement of the technology.
To empower research and academic institutions around the world to further investigate the relationship between people and their avatars and how it affects interactions with others in the virtual world, Microsoft is making the Microsoft Rocketbox library—a collection of 115 avatars representing humans of different genders, races, and occupations—a publicly available resource for free research and academic use. Microsoft Rocketbox can be downloaded from GitHub. The release of the library coincides with last week’s celebration of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR) and the presentation of our latest in avatar research, an active area here at Microsoft Research.
The flexibility of the Rocketbox library
Originally developed by Rocketbox Studios GmbH and later supported by Havok, which Microsoft acquired in 2015, the Microsoft Rocketbox avatar library represents extensive work in research and prototyping conducted over a 10-year span. Rocketbox Studios released its first library of avatars, “Complete Characters,” in 2005. A new generation of highly detailed avatars and animations named “Complete Characters HD” was developed and released from 2010 to 2015. This is the library we’re now making public.
The library’s avatars are rigged, or equipped with an internal skeleton to allow easy animation, and have been used in laboratories worldwide. They’re designed to give researchers the flexibility they need. For example, their geometry is designed to make it easy to mix heads and bodies from different characters, as well as to mix and match texture elements and outfits. Thanks to the common skeletal configuration across all the avatars, it’s also possible to use animation sets across all characters of the library without tedious modifications. Facial animations, including eyebrows and lips, look correct on character variants. Because of these features, as well as the diversity of the characters and the selective polygon resolution of the meshes, the library has been a popular research tool, used for such applications as VR, crowd simulation, and real-time avatar embodiment.
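To make the retargeting idea concrete, here is a minimal, purely illustrative sketch of how a shared skeleton lets one animation clip drive any character in a library. It is not the Rocketbox file format or any Microsoft API; the classes, joint names, and character names below are made up for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# A rotation per joint per keyframe. Joint names follow one shared naming
# convention (e.g., "Hips", "Spine", "Head"), which is what makes reuse work.
Pose = Dict[str, Tuple[float, float, float]]  # joint name -> Euler angles

@dataclass
class AnimationClip:
    name: str
    keyframes: List[Pose] = field(default_factory=list)

@dataclass
class Avatar:
    name: str
    skeleton: List[str]                       # joint names, identical across the library
    current_pose: Pose = field(default_factory=dict)

    def apply(self, clip: AnimationClip, frame: int) -> None:
        """Copy a keyframe onto this avatar. Works for any avatar that
        exposes the same joint names the clip was authored for."""
        pose = clip.keyframes[frame]
        missing = [joint for joint in pose if joint not in self.skeleton]
        if missing:
            raise ValueError(f"{self.name} lacks joints: {missing}")
        self.current_pose = dict(pose)

# One shared skeleton definition (a tiny subset, for illustration only).
SHARED_SKELETON = ["Hips", "Spine", "Head", "LeftArm", "RightArm"]

wave = AnimationClip("wave", keyframes=[
    {"RightArm": (0.0, 0.0, 80.0), "Head": (0.0, 10.0, 0.0)},
    {"RightArm": (0.0, 0.0, 60.0), "Head": (0.0, -10.0, 0.0)},
])

# The same clip drives two different characters because their skeletons match.
# Character names here are invented, not actual Rocketbox asset names.
doctor = Avatar("Female_Doctor", SHARED_SKELETON)
worker = Avatar("Male_Construction_Worker", SHARED_SKELETON)
for frame in range(len(wave.keyframes)):
    doctor.apply(wave, frame)
    worker.apply(wave, frame)
print(doctor.current_pose, worker.current_pose)
```

Because every avatar exposes the same joint names, the clip applies to any of them without per-character adjustment—the property that makes a common skeletal configuration so convenient for research.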
Avatar embodiment
Avatar embodiment is one field of avatar research that the release of the Microsoft Rocketbox library will further enable. When you try to look at your body in virtual space, the near-eye display of the headset blocks your view of the real world and—with it—your view of your own body. Many VR applications still deploy such a mode, where the user is without a visible body—essentially, disembodied—and only the floating image of the handheld controllers is seen, creating a ghostlike quality. Such a mode has been widely adopted because it doesn’t interfere much with individuals’ “place illusion”—that is, the feeling that they’re located in a new place—and does a decent job at creating a “plausibility illusion,” or belief that events happening in the virtual environment are really happening around them.
Place illusion and especially plausibility are, however, much stronger when the participant looks down and is embodied in an avatar that replaces their body (see Figure 1) and moves as they move. People largely prefer such a liberating experience, and as a result, they experience a greater sense of presence. Research has shown that our brain is flexible enough to accept this new artificial visual body as its own if its behavior agrees closely enough with the information the brain gets from other senses. For example, if the avatar follows our motions, or if we feel an object touching our skin in synchronization with a virtual object touching the avatar, our brain tends to fuse these signals and accept the avatar as its own body. This effect is called “embodiment” or “body ownership.”
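One way to build intuition for why synchrony matters: the delay between a person’s tracked motion and their avatar’s rendered motion can be estimated directly from the two signals. The sketch below is a simple, hypothetical illustration (not code from any of the studies mentioned here) that recovers that delay by finding the time shift that best aligns the two trajectories; large, noticeable lags are the kind of visuomotor mismatch that tends to weaken body ownership.

```python
import numpy as np

def estimated_lag(tracked: np.ndarray, avatar: np.ndarray, max_lag: int = 30) -> int:
    """Estimate how many frames the avatar trails the tracked motion by
    finding the shift that maximizes the correlation between the signals."""
    tracked = (tracked - tracked.mean()) / tracked.std()
    avatar = (avatar - avatar.mean()) / avatar.std()
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        shifted = avatar[lag:]                              # avatar advanced by `lag` frames
        corr = float(np.dot(tracked[:len(shifted)], shifted)) / len(shifted)
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# Synthetic example: the "avatar" signal is the tracked hand delayed by 5 frames.
rng = np.random.default_rng(0)
tracked = np.cumsum(rng.normal(size=300))                   # a wandering hand trajectory
avatar = np.concatenate([np.zeros(5), tracked[:-5]])
print(estimated_lag(tracked, avatar))                       # expected: 5, the injected delay
```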
A full review of embodiment and other applications of avatars, together with a technical reference on the importance of rigged avatars, is in preparation with authors from laboratories and tech companies around the world who support the library release. This review and position paper, “Importance of rigging for procedural avatars. Microsoft Rocketbox a public library,” will be released in the coming weeks.
IEEE VR 2020
Apart from the Microsoft Rocketbox release, we also presented our latest research on avatars last week at IEEE VR.
In the best paper award finalist “The Self-Avatar Follower Effect in Virtual Reality,” we explored how agency of the body can be disrupted and how sometimes people start copying what their avatar does, a new theory we’re calling the follower effect. This work is part of a larger interest in the field of avatar motion control. In previous experiments, we found that the brain may accept body ownership even when the avatar’s motion looks similar to, but strays a bit from, the person’s actual motion. Applications of this work range from rehabilitation and accessibility to improvement of haptics. For example, we’ve been able to generate a strong sense of haptics by guiding individuals’ hands to touch real objects, creating redirected haptics.
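Redirected haptics of this kind is often implemented by warping the rendered hand: as the real hand reaches toward a physical prop, a growing offset is blended into the virtual hand so that it arrives at the virtual target at the same moment the real hand touches the prop. The sketch below shows one common linear formulation of such hand warping; it is an illustrative assumption, not the specific method used in the paper.

```python
import numpy as np

def warped_hand(real_hand: np.ndarray, start: np.ndarray,
                physical_prop: np.ndarray, virtual_target: np.ndarray) -> np.ndarray:
    """Blend an offset into the rendered hand position so that when the real
    hand reaches the physical prop, the virtual hand reaches the virtual target.
    `alpha` grows from 0 at the start of the reach to 1 at the prop."""
    total = np.linalg.norm(physical_prop - start)
    progress = np.linalg.norm(real_hand - start)
    alpha = np.clip(progress / total, 0.0, 1.0)
    offset = virtual_target - physical_prop
    return real_hand + alpha * offset

# Example reach: the real hand moves toward a prop at (0.4, 0, 0.3) while the
# virtual target sits 10 cm to its right at (0.5, 0, 0.3). All values invented.
start = np.array([0.0, 0.0, 0.0])
prop = np.array([0.4, 0.0, 0.3])
target = np.array([0.5, 0.0, 0.3])
for t in np.linspace(0.0, 1.0, 5):
    real = start + t * (prop - start)
    print(np.round(warped_hand(real, start, prop, target), 3))
```

By the end of the reach the offset equals the full gap between prop and target, so the user sees their hand touch the virtual object while their real hand touches the physical one.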
In another paper, “Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification,” we were able to augment self-identification with avatars and demonstrate the importance of animating avatars’ facial expressions. We show new ways to achieve a strong enfacement illusion and to generate avatar enfacement during social interactions. This is an important contribution that will help create better identification with our digital representations even if, as we’ve found, they don’t look like us.
The work presented last week at IEEE VR lies at the intersection of many fields beyond computer science and human-computer interaction, such as neuroscience and psychology, and shows the relevance of avatars to the future of virtual reality. The release of the Microsoft Rocketbox library will further enable worldwide laboratories using these avatars to advance the state of the art of a range of topics, including bystander behavior, obedience to authority, body ownership illusions, and crowd simulations.