Microsoft AI for Accessibility Sign Language Recognition & Translation Workshop

Organizers:
Danielle Bragg, Postdoctoral Researcher
Meredith Ringel Morris, Principal Researcher
Mary Bellard, Senior Accessibility Architect

Summary:
Developing successful sign language recognition and translation systems requires expertise in a wide range of fields, including computer vision, computer graphics, natural language processing, linguistics, and Deaf culture. In an effort to support people who are Deaf and Hard-of-Hearing in communicating in the language of their choice in more scenarios, we want to better understand advancements in sign language recognition and translation, and explore what is possible in this area of technology.

Microsoft is bringing together a diverse group of experts with relevant skills for a 2-day workshop to discuss the state of the art, imminent challenges, and possible solutions in the area of sign language recognition and translation. A main goal of the workshop is to produce a map of the current landscape and a set of challenges for the community to tackle next.

Date: Monday, February 25, 2019 – Tuesday, February 26, 2019

Time: 9:00 AM – 8:00 PM (Monday); 9:00 AM – 3:00 PM (Tuesday)

Location: Microsoft Research Redmond, Building 99, Room 1919 (Rooms 1927 and 1915 available for breakouts if needed)

Monday, February 25, 2019

08:00 AM  Bus leaves from hotel for workshop location
08:30 AM  Breakfast Available, Check-in
09:00 AM  Welcome (Mary Bellard)
09:30 AM  Ice Breaker
10:30 AM  An Introduction to Deaf Culture (Lance Forshay)

Abstract: This introduction will briefly cover elements of Deaf culture, language, accessibility, and cultural competence. With so many products and technologies developed by non-Deaf people, the Deaf community struggles with a “disability designation” when facing barriers that could have been avoided if developers had taken Deaf people into consideration and applied universal design, keeping our world accessible to all people and empowering to Deaf people themselves.

11:15 AM  Designing Technology for Sign Languages and their Communities (Deniz Ilkbasaran)

Abstract: Language is a technology of the human body. Languages emerge and evolve to most efficiently communicate what their users need, can use, and will use. Originating in deaf communities around the world, sign languages exploit the capacities of the visual-manual modality and the grammatical use of body-space configurations for communicative function. Users of sign language, however, are a diverse group of individuals, especially with respect to hearing status and linguistic competence and experience. This diversity impacts how sign languages are used in the practice of everyday life, which one needs to account for in designing sign language tools. This talk covers the key elements of sign language structure that are relevant for designing language technologies involving sign languages and their users.

12:00 PM  Lunch   
01:00 PM  Putting Words into Computers (Noah Smith)

Abstract: The field of natural language processing (NLP) is now delivering powerful technologies: computers can translate from one language to another, answer questions, and hold conversations. In this relatively non-technical talk, I will trace the evolution of one aspect of NLP programs: how we put English (or other-language) words into computers. While this talk won’t teach you everything about NLP, it will illuminate some of its toughest challenges and most exciting recent advances.

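To make the representation question concrete, here is a minimal sketch (our illustration, not material from the talk) contrasting two ways of putting words into a computer: sparse one-hot vectors and dense learned embeddings. The vocabulary, dimensions, and random vectors are hypothetical; real embeddings are learned from data.

```python
# Contrast one-hot word vectors with dense embeddings (illustrative only).
import numpy as np

vocab = ["sign", "language", "recognition", "translation"]
word_to_id = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    # Classic representation: a sparse vector with a single 1.
    v = np.zeros(len(vocab))
    v[word_to_id[word]] = 1.0
    return v

# Modern representation: each word maps to a dense vector. Random here;
# in practice learned (e.g., word2vec or a neural language model).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 8))

def embed(word):
    return embeddings[word_to_id[word]]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One-hot vectors make every pair of distinct words equally dissimilar;
# dense vectors can encode graded similarity between related words.
print(cosine(one_hot("sign"), one_hot("language")))  # always 0.0
print(cosine(embed("sign"), embed("language")))      # nonzero
```
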
01:45 PM  Computer Vision Meets Speech Recognition: Challenges and Recent Developments in Sign Language Recognition (Oscar Koller)

Abstract: This talk will present recent advances in the field of sign language recognition, observed from an interdisciplinary viewpoint at the intersection of speech recognition and computer vision. We will show several examples, analyze available data sets and trending methods, and examine what challenges remain to be tackled.

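To make the intersection of the two fields concrete: continuous sign language recognition systems often pair per-frame visual features with the alignment-free CTC training objective borrowed from speech recognition. The sketch below is our own hedged illustration of that pattern, not the speaker’s system; the dimensions are arbitrary and the per-frame CNN features are assumed to be precomputed.

```python
# Hybrid vision/speech-style recognizer: frame features -> BiLSTM -> CTC.
import torch
import torch.nn as nn

class SignRecognizer(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, num_glosses=1000):
        super().__init__()
        # A CNN backbone would produce feat_dim features per video frame;
        # here we assume those features are precomputed.
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True,
                           bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_glosses + 1)  # +1 = CTC blank

    def forward(self, frame_feats):            # (batch, time, feat_dim)
        out, _ = self.rnn(frame_feats)
        return self.head(out).log_softmax(-1)  # (batch, time, glosses+1)

model = SignRecognizer()
ctc = nn.CTCLoss(blank=1000)  # blank index matches the extra output unit

feats = torch.randn(2, 120, 512)           # 2 dummy clips, 120 frames each
log_probs = model(feats).transpose(0, 1)   # CTCLoss expects (time, batch, C)
targets = torch.randint(0, 1000, (2, 10))  # 10 dummy gloss labels per clip
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 120, dtype=torch.long),
           target_lengths=torch.full((2,), 10, dtype=torch.long))
loss.backward()  # CTC aligns frames to glosses without frame-level labels
```
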
02:30 PM  Break   
03:00 PM  Learning from Human Movements to Create Accurate Sign Language Animations (Matt Huenerfauth)

Abstract: There is great diversity in the levels of English reading skill and in the language preferences among members of the U.S. Deaf Community, and many individuals prefer to receive information in the form of American Sign Language (ASL). Therefore, providing ASL on websites can make information and services more accessible. Unfortunately, video recordings of human signers are difficult to update when information changes, and there is no way to support just-in-time generation of website content from a user request. Software is needed that can automatically synthesize understandable animations of a virtual human performing ASL, based on an easy-to-update script as input. The challenge is for this software to select the details of such animations so that they are linguistically accurate, understandable, and acceptable to users. This talk will provide an overview of Huenerfauth’s research in using machine-learning techniques to model human movements. His methodology includes: video and motion-capture data collection from signers to collect a corpus of ASL, linguistic annotation of this corpus, statistical modeling techniques, animation synthesis, and experimental evaluation studies with native ASL signers. In this way, his laboratory has found models that underlie the accurate and natural movements of virtual human characters performing ASL. In recent work, his laboratory has created models to predict essential speed and timing parameters for such animations.

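The final step the abstract mentions, predicting speed and timing parameters for animations, is at heart a regression problem over features extracted from an annotated corpus. The sketch below is our own minimal illustration of that idea, not Huenerfauth’s actual models; the features, data, and coefficients are hypothetical.

```python
# Predict a timing parameter (e.g., transition duration in ms) for an
# animated sign from features of the signing context. Illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical per-sign features, as might be derived from a linguistically
# annotated motion-capture corpus:
# [handshape complexity, distance the hands travel, position in phrase]
X = rng.uniform(size=(200, 3))
# Hypothetical durations measured from recordings of human signers.
y = 150 + 80 * X[:, 1] + 20 * X[:, 0] + rng.normal(scale=10, size=200)

model = LinearRegression().fit(X, y)

# At synthesis time, the animation system queries the model per transition
# instead of animating every sign at a single fixed speed.
new_sign = np.array([[0.4, 0.7, 0.1]])
print(f"predicted transition duration: {model.predict(new_sign)[0]:.0f} ms")
```
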
03:45 PM  Crowdsourcing Sign Language Data through Educational Resources and Games (Danielle Bragg)

Abstract: Sign language users lack many fundamental resources. At the same time, computer scientists working on sign language modeling and translation often lack appropriate training data. In this talk, I present the opportunity to design sign language resources that simultaneously meet community needs and collect large corpora of sign language data to support computational efforts. I will demonstrate this potential through three main systems: 1) ASL-Search, a feature-based ASL dictionary trained on crowdsourced data from volunteer ASL students, 2) ASL-Flash, a site that both helps people learn ASL and collects feature evaluations of signs, and 3) ASL-Video, a platform for collecting sign language videos from diverse signers. This is joint work by researchers at Microsoft Research, the University of Washington, and Boston University.

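As a rough illustration of the feature-based lookup behind a system like ASL-Search: each dictionary sign gets a feature vector aggregated from crowdsourced ratings, and a learner’s query of observed features is matched against those vectors by similarity. The sketch below is ours; the feature names, signs, and values are hypothetical, not the real system’s schema.

```python
# Feature-based reverse dictionary lookup for signs (illustrative only).
import numpy as np

FEATURES = ["one_handed", "near_face", "repeated_motion", "fingers_spread"]

# Hypothetical per-sign feature vectors, aggregated from crowd ratings.
dictionary = {
    "MOTHER": np.array([1.0, 1.0, 0.2, 1.0]),
    "SCHOOL": np.array([0.0, 0.0, 1.0, 1.0]),
    "COFFEE": np.array([0.0, 0.0, 1.0, 0.0]),
}

def search(query, top_k=2):
    """Rank signs by cosine similarity between query and sign features."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(dictionary.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return ranked[:top_k]

# A learner saw a one-handed sign made near the face with spread fingers:
query = np.array([1.0, 1.0, 0.0, 1.0])
for gloss, _ in search(query):
    print(gloss)  # MOTHER ranks first
```
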
04:30 PM  Break   
04:45 PM  Panel: Technology and Sign Language Users (Moderator: Larwan Berke)

Panelists: Michael Anthony, Lance Forshay, Leah Katz-Hernandez, Christian Vogler 

05:30 PM  Reflection and Breakout Planning   
05:45 PM  Board bus from workshop to restaurant   
06:00 – 08:00 PM  Banquet (by invitation only)

Tuesday, February 26, 2019

08:00 AM  Hotel guests board bus to workshop location
08:30 AM  Breakfast Available   
09:00 AM  Task for the Day (Danielle Bragg)
09:30 AM  Breakout Sessions  
12:00 PM  Lunch  
01:00 PM  Breakout Results Presentations  
02:30 PM  Discussion and Next Steps (Meredith Ringel Morris)
03:00 PM  End