Non-Visual Mainstream Smartphone and Camera Interactions for Blind and Low-Vision Users

Mainstream smartphones, with their on-board sensors and growing computational power, together with remote human and automated web services, show immense promise for improving the independence and daily accessibility of blind people. Though specialized services and technologies have important uses, there is a trend away from specialized technology and towards mainstream devices for reasons of sustainability, affordability, and breadth of access.

In this work, I investigate multimodal interaction models and software for mainstream accessibility for blind and low-vision people, for purposes of everyday convenience and practicality, aesthetics and creativity, social inclusion, and education. The study of such mainstream multimodal interactions raises broader research questions that affect not only blind people but also the larger community. How do we design and use mainstream technology as optimally and effectively as possible to include a diverse user population? What can we learn from the ways people already use and appropriate mainstream technology for specialized purposes? I present work on accessibility via mainstream technology using audio, haptics, and camera focalization (i.e., focusing on and localizing objects in the user’s environment).

I show how non-visual interactions via mainstream technology can improve the independence and lives of blind and low-vision people by:

  1. Understanding how people with disabilities use mainstream technology, through interviews, a diary study, and a large-scale survey on camera use,
  2. Designing non-visual camera interaction techniques coupled with automated computer vision on mainstream smartphones for blind and low-vision people,
  3. Building motivated prototype applications that take advantage of these interaction techniques, including two object-focalizing applications and a portrait framing application (an illustrative sketch of such framing guidance follows this list), and
  4. Conducting user studies of the prototype applications with blind and low-vision users, as part of the design process, to evaluate the efficacy of the interaction techniques and the usefulness of the applications.
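
The portrait framing application in item 3 is only named above; the talk does not describe its implementation. As a purely illustrative sketch of the kind of non-visual feedback loop such an application relies on, the Python snippet below uses OpenCV's bundled Haar-cascade face detector to turn the offset between the largest detected face and the frame center into short directional cues. The detector choice, the framing_cue function, and the tolerance threshold are all assumptions made here for illustration, not details of the actual prototype.

```python
# Illustrative sketch only: guide a user toward centering a face in the
# camera frame with simple directional cues. This is NOT the prototype from
# the talk; OpenCV's stock Haar cascade stands in for whatever face detector
# a real application would use.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def framing_cue(frame, tolerance=0.15):
    """Return a short directional cue for the largest detected face, or
    'no face' if none is found. `tolerance` is the fraction of the frame
    the face center may be off-center and still count as framed (an
    assumed value, not taken from the talk)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no face"
    # Pick the largest face, i.e., the closest subject.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    frame_h, frame_w = frame.shape[:2]
    # Offset of the face center from the frame center, as a fraction of size.
    dx = (x + w / 2 - frame_w / 2) / frame_w
    dy = (y + h / 2 - frame_h / 2) / frame_h
    cues = []
    if dx > tolerance:
        cues.append("move camera right")   # face sits right of center
    elif dx < -tolerance:
        cues.append("move camera left")    # face sits left of center
    if dy > tolerance:
        cues.append("move camera down")    # face sits below center
    elif dy < -tolerance:
        cues.append("move camera up")      # face sits above center
    return ", ".join(cues) if cues else "good framing"
```

In a real application, the returned cue would presumably be spoken through the platform's screen reader or a text-to-speech engine, with haptic pulses or audio tones as a possible reinforcement of the direction.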

Speaker Bio

Chandrika Jayant received her B.A. in Mathematics and Computer Science, graduating summa cum laude, from New York University in 2004. She then attended the University of Washington (UW) and worked with Professor Richard E. Ladner, receiving her M.Sc. in Computer Science & Engineering in 2006 and her Ph.D. in 2011, having worked at the IBM Tokyo Accessibility Lab and Microsoft in between. Chandrika was awarded an NSF scholarship for three years, an ARCS fellowship, and a departmental fellowship at UW. One of her papers received the Best Paper award at UIST 2010, and she co-wrote a recently funded Department of Education Steppingstones grant. Chandrika has taken on many service and leadership roles, including conducting workshops with the National Federation of the Blind, volunteering with DO-IT, and mentoring in UW's Summer Undergraduate Research Program. She has been a weekly DJ at the Seattle independent radio station KEXP for the past three years.

Date:
Speaker:
Chandrika Jayant
Affiliation:
University of Washington