At the ACM CHI Conference on Human Factors in Computing Systems in Glasgow, Scotland, this May, researchers from Microsoft’s Redmond and UK labs, together with our university collaborators, will be presenting several papers and demos that explore how to design technologies more inclusively to support accessibility for users with cognitive and/or sensory disabilities.
Microsoft researchers Adam Fourney, Kevin Larson, and I teamed up with University of Washington researchers Qisheng Li and Katharina Reinecke to explore the accessibility of the Web to people with dyslexia. Dyslexia is a cognitive disability estimated to affect about 15% of English-speaking adults; people with dyslexia can experience varying degrees of difficulty with reading-related tasks. Because access to information on the Web is a key modern literacy skill, ensuring that online information is cognitively accessible is an important concern; beyond people with dyslexia, improving cognitive access to the Web may benefit other groups who experience reading challenges, such as English language learners or children.
At CHI 2019, lead author and University of Washington graduate student Qisheng Li will present the Microsoft Research-UW team’s findings, summarized in their paper, “The Impact of Web Browser Reader Views on Reading Speed and User Experience.” The team explored whether the “reading mode” common in most modern browsers significantly impacted users’ reading speed and comprehension, and whether users with dyslexia specifically benefited from this intervention. Using the “Lab in the Wild” infrastructure developed by Professor Reinecke, the team conducted an online study with 391 English-speaking adults (42 with dyslexia), in which participants read several popular webpages and answered associated reading-comprehension questions, some in the typical browser view and some in the reading mode.
As expected, people with dyslexia had substantially slower reading speeds than people without dyslexia; however, people with dyslexia did not seem to receive any differential benefit from the reading mode. Instead, the team found that reader view improved the reading speed of all users by about 5%, compared to the default website view. However, the study also found that the reader mode button is disabled by default on many pages and that the rules governing its availability are opaque to web developers; only 41% of the 1,100 popular webpages sampled successfully enabled reader view. Our findings suggest that web page designers should structure their pages so that the reader mode button is enabled in major browsers, giving users the option to reap this reading-speed benefit. Making it easier for web developers to intentionally enable the reading mode option, as well as exploring which particular aspects of the reader view transformations provide the most benefit, are key areas for future work.
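For developers who want to check this ahead of time, Mozilla’s open-source Readability library (which underlies Firefox’s reader view) exposes the heuristic it uses to decide whether a page is “readerable.” Below is a minimal sketch assuming the @mozilla/readability and jsdom npm packages and a placeholder URL; other browsers apply their own, largely undocumented rules, so treat this as an approximation rather than a guarantee.

```typescript
// A minimal sketch: checking whether a page is likely to qualify for
// reader view, using Mozilla's Readability library (the basis of
// Firefox's reader view). Other browsers use their own heuristics,
// so treat the result as an approximation. The URL is a placeholder.
import { JSDOM } from "jsdom";
import { isProbablyReaderable, Readability } from "@mozilla/readability";

async function checkReaderView(url: string): Promise<void> {
  const dom = await JSDOM.fromURL(url);
  const doc = dom.window.document;

  // Heuristic check: does the page contain enough visible,
  // article-like content (e.g., paragraphs of sufficient length)?
  if (isProbablyReaderable(doc)) {
    // Parse the simplified article much as reader view would.
    const article = new Readability(doc).parse();
    console.log(`Reader view likely available: "${article?.title}"`);
  } else {
    console.log("Reader view likely unavailable; consider more semantic markup.");
  }
}

checkReaderView("https://example.com/article"); // placeholder URL
```

Pages with clean, semantic article markup are generally more likely to pass heuristics of this kind, whichever browser is applying them.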
Accessibility beyond the traditional desktop computing experience is also a focus of Microsoft Research’s contributions to CHI this year. Intern Yuhang Zhao, a graduate student at Cornell Tech, will present a paper summarizing joint research with Microsoft researchers Ed Cutrell, Christian Holz, Eyal Ofek, me, and Andrew Wilson that explores how to enhance the accessibility of emerging virtual reality (VR) technologies: “SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision.” The team will also present a live demo during the conference’s demonstration session and at the Microsoft booth; blog readers can experience a demo by viewing the project’s online video figure.
Low vision (that is, visual disabilities that cannot be fully corrected by glasses) affects 217 million people worldwide, according to the World Health Organization. While desktop software offers some accommodation features for people with low vision (for example, screen magnifiers), VR systems have not yet grappled with the issue of accessibility for this audience. Indeed, when interviewing VR developers, the team found that none had received training or guidance on how to develop accessible VR experiences.
Because low vision encompasses a range of visual abilities (for example, tunnel vision, blind spots, brightness sensitivity, low visual acuity, and so on), the team took a toolkit approach: they developed SeeingVR, a set of 14 tools for Unity developers (Unity is one of the most widely used VR development platforms). End users can activate different combinations of these tools depending on their abilities and the context of the current application and task. Example tools include magnifier and bifocal views, brightness and contrast adjustment for the scene, edge enhancement to make virtual objects stand out from their backgrounds, depth measurement tools, and the ability to point at text or objects in a virtual scene to have them read or described aloud. The majority of these tools can be applied to existing Unity applications post hoc, to support easy adoption.
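To make the architecture concrete: SeeingVR itself is a Unity toolkit (Unity scripting uses C#), but the composable-tool idea can be sketched in a language-neutral way. The TypeScript sketch below is purely illustrative and every name in it is hypothetical; the point is that each tool is an independent overlay that can be enabled or disabled without affecting the others.

```typescript
// A hypothetical, language-neutral sketch of the composable-tool
// pattern described above. SeeingVR itself is a Unity (C#) toolkit;
// all names here are invented for illustration. Each tool is an
// independent overlay, so users can activate any combination.
interface AccessibilityTool {
  readonly name: string;
  enable(): void; // attach this tool's effect to the rendered scene
  disable(): void; // detach it without affecting other active tools
}

// Two toy tools standing in for a magnifier and a contrast filter.
const magnifier: AccessibilityTool = {
  name: "magnifier",
  enable: () => console.log("magnification overlay on"),
  disable: () => console.log("magnification overlay off"),
};

const contrastFilter: AccessibilityTool = {
  name: "contrast",
  enable: () => console.log("contrast enhancement on"),
  disable: () => console.log("contrast enhancement off"),
};

// The registry lets a user toggle tools independently at runtime.
class ToolRegistry {
  private active = new Map<string, AccessibilityTool>();

  toggle(tool: AccessibilityTool): void {
    if (this.active.delete(tool.name)) {
      tool.disable();
    } else {
      this.active.set(tool.name, tool);
      tool.enable();
    }
  }
}

// A user with low acuity and brightness sensitivity might run both.
const registry = new ToolRegistry();
registry.toggle(magnifier);
registry.toggle(contrastFilter);
```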
An evaluation with 11 people with low vision completing a variety of tasks in VR (for example, menu selection, grasping objects, shooting moving targets) found that all participants could complete tasks more quickly and accurately when using SeeingVR tools than with the default VR experience. Each participant chose a different combination of the available tools, reinforcing the value of flexibility and customization in low vision accessibility options.
Microsoft researchers are also exploring non-visual representations of VR for people who are completely blind. Microsoft Soundscape is a smartphone application that uses spatial audio to deliver a rich, non-visual navigation experience. At the CHI 2019 workshop on “Hacking Blind Navigation” (co-organized by Principal Researcher Ed Cutrell), Microsoft Research intern and University of Washington student Anne Spencer Ross will present research on how to craft an audio-only VR experience that allows people to rehearse a walking route virtually before experiencing the route in the physical world via Soundscape. Her paper, “Use Cases and Impact of Audio-Based Virtual Exploration,” is a collaboration between engineers from the Soundscape team (Melanie Kneisel and Alex Fiannaca) and researchers in the Microsoft Research Redmond Lab (Ed Cutrell and me). Melanie Kneisel will also be a featured speaker at the workshop.
In addition to presenting research on accessible Web browsing and accessible VR, researchers from Microsoft’s Cambridge, UK lab will be sharing a tangible toolkit to enhance the accessibility of computer science education for children who are blind. Led by researcher Cecily Morrison, the CodeJumper project is a physical programming language for teaching children ages 7–11 basic programming concepts and computational thinking, regardless of level of vision. It was inspired by the need to give young blind and low vision students access to the computing curriculum inclusively, alongside their sighted peers. Children plug together pods that represent lines of code; when run, their programs make music, stories, or poetry. Children can start with very simple concepts (for example, that a program is a sequence of commands) and progress to complicated program flows that use variables, covering the whole of the curriculum for this age band. CodeJumper was successfully tested with 75 children and 30 teachers across the United Kingdom and was found to support age-appropriate learning of coding as well as to encourage whole-child learning, such as forming friendships with sighted peers. The tangible CodeJumper kit will be available for CHI participants to experience during the conference’s demo session.
We look forward to seeing you at CHI 2019 in Glasgow, sharing ideas, and advancing the accessibility conversation together.