Microsoft Ability Initiative: A collaborative quest to innovate in image captioning for people who are blind or have low vision

Published

By , Sr. Principal Researcher & Research Area Manager

From left to right: Danna Gurari, University of Texas; Ed Cutrell, Microsoft Research; Roy Zimmermann, Microsoft Research; Meredith Ringel Morris, Microsoft Research; Ken Fleischmann, University of Texas; Neel Joshi, Microsoft Research

Microsoft is committed to pushing the boundaries of technology to improve and positively influence all parts of society. Recent advances in deep learning and related AI techniques have resulted in significant strides in automated image captioning. However, current image captioning systems are not well aligned with the needs of a community that can benefit greatly from them: people who are blind or have low vision.

We recently completed a competitive process to find an academic research team to help change that. We’re excited to partner with The University of Texas at Austin for our new Microsoft Ability Initiative. This companywide initiative aims to create a public dataset that can ultimately be used to advance the state of the art in AI systems for automated image captioning. We recently spent two days with the research team in Austin to kick off this exciting new collaboration.

Microsoft researchers involved in this effort have specialized experience in accessible technologies, human-centric AI systems, and computer vision. Their efforts are complemented by colleagues in other divisions of the company, including the AI for Accessibility program, which helps fund the initiative, and Microsoft 365 accessibility. The Microsoft Ability Initiative is one of a growing number of efforts at Microsoft in which researchers and product developers come together in a cross-company push to spur new research and development in accessible technologies.

“We are excited about this new initiative,” said Wendy Chisholm, Principal Program Manager with the AI for Accessibility program at Microsoft. “The goal of creating public data resources that can accelerate innovations with AI that empower people who are blind or with low vision is a fantastic example of the kind of impact Microsoft hopes to have through its AI for Accessibility program.”

UT Austin stood out last year among a select group of universities with specialized experience that were invited to participate in the competitive process to identify an academic partner for the initiative. Principal investigator Professor Danna Gurari and Professor Kenneth R. Fleischmann are leading the team at UT Austin, which also includes several graduate students.

Professor Gurari has a proven record of creating public datasets that advance the state of the art in AI and accessibility, having co-founded the VizWiz Grand Challenge. The UT Austin team, which we’ll collaborate with over a period of 18 months, plans to take a user-centered approach to the problem, including working with people who are blind or have low vision to better understand their expectations of AI captioning tools. The team also plans to launch community challenges to engage a broad swath of researchers and developers in building these next-generation tools.

“I hope to build a community that links the diversity of researchers and practitioners with a shared interest in developing accessible methods in order to accelerate the conversion of cutting-edge research into market products that assist people who are blind or with low vision in their daily lives,” said Gurari.

A state-of-the-art vision-to-language system labeled this image as “a group of people posing for the camera.” While not incorrect, the caption omits many of the details that make the image compelling, such as the fact that it shows a family (a grandfather, a mother, and two children) dressed in Harry Potter costumes. Training AI systems to provide more detailed captions that offer a richer understanding of images for people who are blind or have low vision is an important goal of this new research initiative.

This collaboration with UT Austin builds upon prior Microsoft research that has identified a need for new approaches at the intersection of computer vision and accessibility. Such work includes studies on how end users who are blind interpret the output of AI image labeling systems and on the types of detail missing from automated image descriptions. We’ve also built a prototype exploring new techniques for interacting with image captions, one that takes advantage of the more detailed, structured caption content that future AI systems may provide. Our prior research has identified many key challenges in this realm, and we’re looking forward to working with UT Austin to make strides toward actionable solutions. Our Cognitive Services and Azure cloud computing resources provide a technical foundation that will support the joint research effort.
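For readers curious what today’s baseline looks like in practice, here is a minimal sketch of how a caption like “a group of people posing for the camera” can be requested from the Computer Vision API in Cognitive Services. This is one of several ways to make the call; the endpoint region, subscription key, and image URL below are placeholders, not values from this project:

```python
import requests

# Placeholders: substitute your own Computer Vision region and subscription key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/describe"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def describe_image(image_url, max_candidates=3):
    """Request up to `max_candidates` candidate captions for a public image URL."""
    response = requests.post(
        ENDPOINT,
        params={"maxCandidates": max_candidates},
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()["description"]["captions"]

# Hypothetical image URL, used here for illustration only.
for caption in describe_image("https://example.com/family-photo.jpg"):
    # Each candidate caption carries a confidence score between 0 and 1.
    print(f'{caption["confidence"]:.2f}  {caption["text"]}')
```

A call like this typically returns a single short, generic sentence per image; producing the richer, more detailed descriptions discussed above is precisely the gap this initiative aims to help close.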

Professor Gurari noted that the initiative will not only advance the state of the art in vision-to-language technology, continuing the progress Microsoft has made with such tools and resources as the Seeing AI mobile phone application and the Microsoft Common Objects in COntext (MS COCO) dataset, but will also be a teaching opportunity for students at UT Austin.
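As a concrete example of the kind of public resource involved, the sketch below loads the human-written captions that MS COCO pairs with each image, using the community-maintained pycocotools package. The file path is an assumption based on the standard layout of the downloaded 2017 annotations:

```python
from pycocotools.coco import COCO  # pip install pycocotools

# Assumption: the 2017 caption annotations were downloaded from cocodataset.org
# and unpacked into a local "annotations" directory.
coco_caps = COCO("annotations/captions_val2017.json")

img_id = coco_caps.getImgIds()[0]             # pick an arbitrary image
ann_ids = coco_caps.getAnnIds(imgIds=img_id)  # ids of its caption annotations
for ann in coco_caps.loadAnns(ann_ids):
    # MS COCO provides multiple independent human captions per image.
    print(ann["caption"])
```

A dataset of this shape, images paired with multiple free-form human captions, illustrates the kind of public resource the initiative aims to produce, with the important difference that the new dataset will center the needs and expectations of people who are blind or have low vision.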

“I love to see the excitement in so many of my students when they realize that they can use their skills to make a difference in the world, especially for people who are blind or with low vision,” she said.

We came away from our meetings at The University of Texas at Austin even more energized about the potential for this initiative to have real impact in the lives of millions of people around the world. We expect that, at the end of this joint effort, the broader research community will leverage the new dataset to jump-start another wave of innovative research leading to new technologies for people who are blind or have low vision.
