Accessible systems for sign language computation with Dr. Danielle Bragg

head shot of Danielle Bragg for the Microsoft Research Podcast

Episode 118 | June 10, 2020

Many computer science researchers set their sights on building general AI technologies that could impact hundreds of millions – or even billions – of people. But Dr. Danielle Bragg, a senior researcher at MSR’s New England lab, has a slightly smaller and more specific population in mind: the roughly seventy million people worldwide who use sign languages as their primary means of communication.

Today, Dr. Bragg gives us an insightful overview of the field and talks about the unique challenges and opportunities of building systems that expand access to information in line with the needs and desires of the Deaf and signing community.

Transcript

Danielle Bragg opening quote: As machine learning becomes more powerful, having data to train those models on becomes increasingly valuable. And, in working with minority populations, we’re often working in data-scarce environments because the population is small and there might be other barriers to collecting data from those groups in order to build these powerful tools that actually can really benefit these minority communities.

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: Many computer science researchers set their sights on building general AI technologies that could impact hundreds of millions – or even billions – of people. But Dr. Danielle Bragg, a senior researcher at MSR’s New England lab, has a slightly smaller and more specific population in mind: the roughly seventy million people worldwide who use sign languages as their primary means of communication.

Today, Dr. Bragg gives us an insightful overview of the field and talks about the unique challenges and opportunities of building systems that expand access to information in line with the needs and desires of the Deaf and signing community. That and much more on this episode of the Microsoft Research Podcast.

Host: Danielle Bragg, welcome to the podcast.

Danielle Bragg: Thank you. It’s great to talk to you.

Host: I like to start by situating both my guests and their labs and let’s start with your lab. You’re a senior researcher at MSR New England and you’re hanging out in Cambridge, Massachusetts? Tell us about the work going on in the lab in general and why it’s important. What gaps – or research gaps – do the folks in Cambridge fill?

Danielle Bragg: Yeah, we’re located in Cambridge. The lab is a very interdisciplinary place to be. We have a lot of people from different fields, not just computer scientists, but we also have economists, people who work in theory and social science, which makes it a really interesting place to work. We work on problems that are both technically challenging and have societal impact. So, the collection of skills that we have really fits that mission. We have machine learning experts, economists, theory folks as well as social scientists all working side-by-side, which makes it a really rich place to work.

Host: Yeah. Well, let’s situate you now. Many of my guests locate themselves at an intersection. Where do you live, Danielle? What are your particular research interests and passions and what gets you up in the morning?

Danielle Bragg: Yeah, so my work lies within human-computer interaction, or HCI, but it also fits under accessibility and applied machine learning, so kind of the intersection of those three, I would say.

Host: What is particularly interesting to you right now as you are looking at the sort of broad picture of the research you’re doing?

Danielle Bragg: My work primarily focuses on building systems that expand access to information, in particular for people with disabilities and, in the last few years, in particular for sign language users. So I’m a computer scientist and, as we discussed, I fall under the general umbrella of HCI, but also touch on other fields and I would say I’m personally very motivated by problem solving and by working on problems that I feel have some positive impact in the world.

Host: Well, when we talked earlier, I have to admit, I had some big gaps in my understanding of what sign language actually is and how it works, and perhaps some of our listeners are the same. So, give us a bit of a primer on this very singular language. How and why does ASL pose unique challenges for technical applications that other languages don’t, and how do you answer the question, since most Deaf people can see and read, why not just use English?

Danielle Bragg: Yeah, those are great questions and a great place to start. So ASL stands for American Sign Language, for those listening who haven’t heard the acronym before, and ASL is a natural language. It has its own grammar and vocabulary just like English or any other spoken language. There are actually many different sign languages used around the world, which some people may not realize, and American Sign Language is the primary language of the Deaf community in the United States, specifically, as well as a few other areas around the world. There are a number of linguistic features that make up sign languages just like there are linguistic features that make up spoken languages. For ASL, there is hand shape, location of the hand on the body, and movement. Those are the three primary types of features, but there are a whole host of other features that are also important. For example, non-manuals which include facial expressions and other types of body gestures. There’s finger spelling, classifiers, depictions, where you’re kind of acting out certain content. It’s a really beautiful language and there’s a really rich culture centered around it, just like there’s rich culture centered around other languages around the world.

Host: So interestingly, I wasn’t even going to ask this, but as you bring it up, I’m thinking to myself, are there idioms within the language? Is there slang within the language? Are there things that are outside the normal sort of structural grammar of the language as it evolves with people and, and generations?

Danielle Bragg: Yeah. Yeah, there definitely are. There are different dialects used by different sub-populations. There are also just really rich genres of literature. There’s Deaf poetry. There’s certain types of stories that people like to tell and because the language is visual, there’s a lot of richness there that you don’t really get with spoken languages. But I should also give a disclaimer. I’m not deaf myself and I’m still learning a lot about this space. I’ve taken some ASL classes and learned about Deaf culture and the Deaf community, but, you know, I don’t have a lifetime of experience so I’m always learning as well.

Host: Just as a point of interest, is there Chinese Sign Language? Is there Spanish Sign Language, French Sign Language? Or is it that granular?

Danielle Bragg: Yes. There’s Greek Sign Language, British Sign Language, French Sign Language… there are many different sign languages across the world.

Host: Okay.

Danielle Bragg: However, American Sign Language is actually more closely related to French Sign Language than it is to British Sign Language so the relationships between the sign languages don’t always mirror, exactly, the relationships between…

Host: Spoken languages….

Danielle Bragg: …the spoken languages.

Host: Interesting.

Danielle Bragg: And that’s because, you know, there’s a different history and evolution and the groups of people who are using those languages mixed in slightly different ways, but they are basically geographically situated because people who physically live near one another talk to one another more.

Host: Right, right, right. I’ve got my mouth open just like, I didn’t know that I didn’t know that either. We’re off to a great start! Well, before we get into your technical work, I think it’s really important to understand who you’re doing it for and why you’re doing it and we sort of alluded to that already, but when we talked before, you mentioned two groups: Deaf signers and people who have hearing loss but don’t sign. So, you’re addressing two, sort of, populations there. More interesting to me, though, is the fact that you frame this whole thing in terms of culture. So, I’d like you to talk about your goal of improving technical accessibility for the two main groups and how that plays out, but maybe you could help us understand the cultural aspects first?

Danielle Bragg: So the Deaf community has a really rich culture and ASL is a very important part of that culture, and in this conversation we’re focusing on ASL because we’re here in the US and that’s where most of my work is focused, but a lot of this applies to other sign languages as well. And within the Deaf community, ASL has a really sacred place, I would say. It’s a really beautiful language and it’s kind of the binding glue for the community in many ways, and a lot of my work focuses on helping to preserve ASL and supporting people who want to use ASL. So, a lot of my work is about supporting sign language use and supporting people using it in their interactions with technology. Being deaf is a point of cultural pride, so many people who are deaf don’t view themselves as disabled. They view being deaf as a cultural identity. If you’re deaf, you can still do anything that anyone else does. You can go to the store, you can drive a car, you can go to work, but the communication piece is where the barriers come into play, and communication is central to culture, right? So, people who share a language develop cultural connections with one another in a different way.

Host: Well, you’ve put the big picture of building accessible information systems in a data-driven frame. So, talk about this approach, writ large, and how it’s informing the specific projects and papers you’re working on since data is central to the technological approaches that many of you are working on right now.

Danielle Bragg: Yeah, data is central to a lot of technologies that are being developed. As machine learning becomes more powerful, having data to train those models on becomes increasingly valuable. And, in working with minority populations, we’re often working in data-scarce environments because the population is small and there might be other barriers to collecting data from those groups in order to build these powerful tools that actually can really benefit these minority communities. And so, in my work, I try to build data-driven solutions and in doing that, I often will try to actually collect data in a system that is also providing benefit to the community. So, we don’t have to go to the community and say, oh, give us your data, we’ll pay you, or provide some other kind of compensation. If we can actually build systems that provide benefit to the community while they’re contributing, that can be a much more organic solution to this type of problem.

Host: Okay… if you’re approaching this from a data-driven perspective, and data is scarce, what’s the biggest problem you face in your research right now?

Danielle Bragg: Well, I would say one of the biggest challenges is dealing with this data scarcity and figuring out how to collect data in these environments actually presents a host of really rich research problems to work on. You can be really creative in designing systems that incentivize people to participate and provide benefit while also collecting data to then train other models and provide other types of services.

Host: Well, let’s go upstream for a second and talk about what kinds of models you want to provide that you would need this data for. So, what kinds of, sort of, top-level applications or solutions are you aiming for?

Danielle Bragg: Yeah, so within the sign language space, the dream, in some sense, would be to provide end-to-end translation between, for example, English and American Sign Language, and that translation needs to be bi-directional, right? So, it’s not enough to just recognize signs and translate that into English. We also need to let the Deaf person know what, you know, people speaking in English around them are saying. So, we need to translate from English to American Sign Language as well. And recently, there have been some advances in deep learning and convolutional neural nets, in particular, that seem promising in this space, but it’s important to note that any technical solution would be dictated by the needs of the Deaf community and would not be a replacement for human interpreters.

(music plays)

Host: Let’s talk about what you call sign language computation, which is sort of an umbrella term for all the research going on here. Give us an overview of the current state-of-the-art for sign language computation and then – and this is going to be a multi-part question so I will keep bringing us back, making sure we cover everything – talk about the biggest challenges you face in five areas that you identify as datasets (which we’ve sort of already talked about); recognition and computer vision; modeling and NLP; avatars and graphics; and then UI/UX design. That’s a lot to unpack. If we get lost, I’ll bring us back, but let’s start with the state-of-the-art of sign language computation.

Danielle Bragg: Sure. So that breakdown into those five groups is really helpful for thinking about this space. Those five areas are really needed for developing end-to-end, bi-directional translation. So, first, we’ll talk about datasets. Existing sign language datasets are primarily in video format and there are a number of different ways that people have tried to collect these videos. You can curate videos from professional interpreters. You can try to scrape different online resources, but these are all limited in some way. In particular, the diversity of the signers in the videos, and how many fluent Deaf signers you get as opposed to students or professional interpreters, is often limited, and just the sheer size of the dataset is also very limited. So, to put that last problem in context, for speech we typically have corpora of between five million and one billion words, and for sign language the largest datasets we have are less than a hundred thousand signs, total. That’s a very large difference in how much data we have, and if you think about the history of speech recognition, how long it took to get where it is today, and how much difference having all that data has made, that might put into context how hard this is.

Host: Okay, so if we’re talking about datasets being limited and you’re looking for robust machine learning models to help get to a robust sign language computation application, how do the other things play in? You mentioned recognition and computer vision. Let’s talk about that for a second.

Danielle Bragg: Yeah, so in the space of recognition and computer vision for sign language recognition, it’s a pretty young field dating back to the ‘80s, when people used hard-wired circuits and rule-based approaches, for example fitted to gloves that had little sensors in them. Those types of systems are limited in how well they work. In addition to the technical constraints, gloves also have other problems: if you’re using gloves for recognition, you’re missing a lot of important grammatical information that is on the face, for example, and you’re asking someone to carry around gloves and put them on all the time, and they also don’t provide the bi-directional translation that’s really needed to have a conversation, right? If you’re wearing gloves and signing, maybe some microphone can speak out what you’re saying, but then if someone talks back to you, you have no idea what they’re saying. So, it’s a very incomplete solution. But for technical reasons, people started out with those types of approaches. More recently, advances in neural networks, for example, CNNs and hybrid models that pull together information from different types of models, have been promising, but we’re still operating in this data-limited environment, so we don’t actually know how well those models might perform given enough data.

Host: All right, so the state-of-the-art in recognition and computer vision isn’t a very good state-of-the-art, is what you’re saying…

Danielle Bragg: Yeah, basically.

Host: And so, the challenge for researchers there is, what could we do instead, or how could we augment or advance what we’ve done in these areas with new tools? New approaches?

Danielle Bragg: I mean, yeah, people are playing around with different types of models. People have also tried to be clever with pulling together multiple datasets, for example, or tuning parameters in certain ways, but ultimately, my intuition is that we really need more data. Once we have more data, we can figure out how to finesse the models, but we don’t even know how far the models can take us right now because we don’t have the data to fully try them out.

Host: All right, well, I want to get back to how you’re going to go about getting data because we had a really interesting conversation about that a couple days ago, but let’s continue to unpack these five areas. The next one we talked about was modeling and NLP, natural language processing. How does that play into this?

Danielle Bragg: Yeah, so modeling and NLP is very important for figuring out how to translate and how to do other interesting computations with sign language. These types of approaches have traditionally been designed for spoken and written languages, which introduces certain difficulties. For example, there are certain assumptions with written and spoken languages, in particular, that one sound happens at a time, but in sign languages, one movement doesn’t always happen at a time. You can have multiple things going on at the same time, and some of these models don’t allow for those types of complexities that a sign language might have. Another complexity is that the use of space can be contextual in sign languages. So sometimes, if you point to the right of you, you might be referring to yourself at home. At another point, while you’re talking to someone, you could reestablish that area to mean yourself at the coffee shop. And so, we need contextual models that can recognize these types of nuances, and the models built for speech don’t account for these types of complexities.

Host: Right.

Danielle Bragg: So, we may need new types of models.

Host: Okay.

Danielle Bragg: Another big problem in this space is a lack of annotation. So even if we have videos of people signing, we often don’t have written annotations of what is actually being signed and a lot of the NLP techniques, for example, really rely on annotations that computers can process in order to work.

Host: Okay. These are huge challenges. Well, let’s talk about avatars and graphics as another challenge in this field.

Danielle Bragg: Yeah, so avatars and graphics are needed to render content in a sign language. So, we’ve talked about this bi-directional translation that would be great to facilitate and, in moving from English to ASL, for example, you need some kind of rendering of the signed content, and avatars and computer graphics provide a nice way to do that. The process of creating an avatar is actually really complex and, right at the moment, a human is needed to intervene at basically every step of the way, so we have a lot of work to do in this space as well, but typically, the process starts with some kind of annotated script that gets translated into a motion plan for the avatar. A number of parameters, then, need to be tuned. For example, speed within individual signed units or across signed units, and then finally, we need some animation software to actually render the avatar. I should also mention that avatars have had mixed reception among the Deaf community, especially if they are not very realistic looking, they can be kind of disturbing to look at, so there are lots of challenges in this space.
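To make that pipeline a little more concrete, here is a minimal sketch of the stages Bragg describes: an annotated script becomes a motion plan, timing parameters are tuned within and across signed units, and an animation step renders the result. Every name and number below is a hypothetical illustration, not how any real avatar system is implemented.

```python
from dataclasses import dataclass, field

@dataclass
class SignUnit:
    gloss: str                 # annotated sign from the script, e.g. "STORE"
    duration_ms: int = 600     # tunable speed within the individual signed unit
    transition_ms: int = 150   # tunable speed across signed units

@dataclass
class MotionPlan:
    units: list = field(default_factory=list)

def plan_from_script(glosses):
    """Translate an annotated script (a sequence of glosses) into a motion plan."""
    return MotionPlan(units=[SignUnit(g) for g in glosses])

def render_schedule(plan):
    """Stand-in for the animation software: print the timing it would follow."""
    t = 0
    for unit in plan.units:
        print(f"{t:>5} ms  animate {unit.gloss} for {unit.duration_ms} ms")
        t += unit.duration_ms + unit.transition_ms

render_schedule(plan_from_script(["ME", "GO", "STORE"]))
```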

Host: Are they sophisticated enough to even get to the uncanny valley or are they just lame?

Danielle Bragg: Ahhh, I mean, it probably depends on the avatar…!

Host: I suppose. Well, either way it sounds expensive and cumbersome to have this be an approach that’s viable.

Danielle Bragg: Yeah, it is difficult. I mean, there are some companies and research groups that have tried to make avatars and they typically spend a lot of money collecting very high quality examples of signs that they can later string together in the avatar, but even with that, you need a human to come in and manage and clean up whatever is generated.

Host: Well, let’s talk about UI/UX design and that interface between Deaf signers and computation. What are the challenges there?

Danielle Bragg: So, I think UI/UX design is another really rich space for exploration and development, in particular because sign language is a different modality from written and spoken languages. But again, a big challenge here is designing interfaces that will be useful despite our lack of data and despite the limitations of our current technologies.

Host: Mmm-hmm.

Danielle Bragg: So, figuring out ways to provide a human-in-the-loop solution, or to provide results that are good enough and can then learn from users as they’re using the system, or other creative ways to support users, becomes a really rich space for design and exploration.

Host: Right. Right. So, there’s a lot of opportunity for research in this area and probably a lot of room for other researchers to join the effort.

Danielle Bragg: Yeah, definitely. I think it’s also one of the most interdisciplinary spaces that I’ve come across, right? You need people who are experts in Deaf studies and linguistics and HCI and machine learning. You need all of these areas to come together to make something that’s really going to be useful for the community.

Host: Tell me a little bit more about your ideas and approaches for actually gathering data. You’ve alluded to some of the difficulties in the existing datasets. So how might you broaden your data collection?

Danielle Bragg: Yeah, so that’s a great question. I can give an example of one system that I’ve been working on that both provides benefit to the community and collects useful data at the same time. So, one project I’ve been working on, which was started when I was a PhD student at the University of Washington with my former advisor Richard Ladner, is to build an ASL dictionary. So, if you come across a sign that you don’t know and you want to look it up, that can be really challenging. Existing search engines and search interfaces are typically designed around English, but it’s really hard to describe a sign in English and we also just don’t have videos indexed that way, right? Like, what would your query look like? Right hand moves right, left hand moves up, you know?

Host: Right.

Danielle Bragg: Two fingers extended. We just don’t support those types of queries, and searching by gesture recognition also doesn’t work very well because we don’t really have those capabilities working accurately yet. So, we designed a feature-based dictionary where you can select a set of features that describe the sign you’re trying to look up, for example, different types of hand shapes or movements, and then we match that against a database of past queries for the signs in the dictionary and sort the results based on similarity to those past queries in order to give you a good result. And in this way, while you’re using the dictionary to look up a sign, you’re actually providing data that can be used to improve the models and improve results for people in the future.

Host: Right.

Danielle Bragg: So, these types of systems, where users are providing data that will actually improve the system going forward, can be a really nice way to jump-start this problem of data scarcity.
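As a rough illustration of the matching Bragg describes, the sketch below ranks signs by comparing a user’s selected features against logged queries for each sign. The feature vocabulary, the toy query log, and the cosine-similarity scoring are assumptions made for this example; the dictionary’s actual back end, as she explains later in the episode, is powered by Latent Semantic Analysis.

```python
import numpy as np

# Hypothetical feature vocabulary a user can select from (hand shape, location, movement).
FEATURES = ["flat_hand", "fist", "index_extended", "two_fingers_extended",
            "at_chin", "at_chest", "move_right", "move_up", "circular"]

def encode(selected):
    """Binary vector for a set of selected feature names."""
    return np.array([1.0 if f in selected else 0.0 for f in FEATURES])

# Toy log of past queries: sign -> feature sets that earlier users chose
# when they looked that sign up (illustrative data only).
past_queries = {
    "HELLO": [{"flat_hand", "at_chin", "move_right"}],
    "UNDERSTAND": [{"index_extended", "at_chin"}, {"index_extended", "move_up"}],
}

def rank_signs(new_query, past_queries, top_k=5):
    """Rank signs by cosine similarity between the new query and the mean
    of each sign's past query vectors."""
    q = encode(new_query)
    scores = {}
    for sign, queries in past_queries.items():
        profile = np.mean([encode(s) for s in queries], axis=0)
        denom = np.linalg.norm(q) * np.linalg.norm(profile)
        scores[sign] = float(q @ profile / denom) if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

print(rank_signs({"flat_hand", "at_chin"}, past_queries))
```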

Host: Right. And you talked earlier about existing datasets, which involve maybe some videos that have been taken from somebody giving a speech with a Deaf signer in front of or beside them, and are those all public domain? Are you able to use those kinds of things that exist and just pull them in, or is there a problem there as well?

Danielle Bragg: Yeah, that’s a great question too. So, some datasets are public domain, some are not. So, collecting sign language data is very expensive and not only in terms of, you know, dollars spent, but also in terms of time and resources, and so groups who collect datasets may be disincentivized to share them.

Host: Right.

Danielle Bragg: That could be research groups who invested a lot in collecting a dataset, but it could also be companies who are trying to build a translation software and they’re trying to out-do their competitors so…

Host: Right.

Danielle Bragg: …there are a lot of datasets that are not publicly available. We don’t actually know exactly how big those datasets are because they’re not public, but it seems like they’re pretty small based on the quality of existing translation and recognition systems.

(music plays)

Host: All right, well, let’s move on to a recent paper that you published. In fact, it was in 2019 and it won the best paper award at ASSETS and you addressed many of the things we’ve talked about, but the paper also addresses the problem of silos and how to bring separate portions of the sign language processing pipeline together. So, talk about the questions you asked in this paper and the resulting answers and calls to action. It was called Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective.

Danielle Bragg: Yeah, so, we were trying to answer three main research questions. First is, what is the current state-of-the-art of sign language technology and processing? Second, what are the biggest challenges facing the field? And then third, what calls to action are there for people working in this area?

Host: Mmm-hmm.

Danielle Bragg: And, as you mentioned, this is a very interdisciplinary area and we need people working together across diverse disciplines and so we organized a large, interdisciplinary workshop in February of 2019. We invited a variety of academics working in a variety of fields. We also had internal attendees who were employees at Microsoft, and in particular, we made sure to invite members of the Deaf community because their perspective is key, and they led a variety of panels and portions of the day. And as a group, we discussed the five main areas that we have already talked about…

Host: Right.

Danielle Bragg: …and kind of summarized, you know, what is the state-of-the-art, what are the challenges, and where do we go from here?

Host: Mmm-hmm.

Danielle Bragg: So that paper was presenting our results.

Host: All right, let’s drill in a little bit on this siloed approach and what some of those problems are as you work towards a robust application in this arena.

Danielle Bragg: So, I think we touched on this a little bit earlier when I was talking about some of the challenges in using NLP techniques for sign language computation. A lot of the NLP techniques are developed with spoken languages in mind and so they don’t really handle all of the complexities of sign languages. So that’s an example of a situation where we really need linguists or Deaf culture experts working together with natural language processing experts in order to create models that actually will apply to sign languages, right? If you only have NLP people who are hearing, who use English, building these models, you’re going to have very English-centric models as a result that don’t work well for sign languages and, you know, the people building them probably don’t realize that they don’t apply.

Host: Right. And which gets to the question of why don’t you just use English? Well, because it’s a different language, right?

Danielle Bragg: Right, exactly. Yeah, American Sign Language is a completely different language from English. It’s not “signed English” so if you know English that doesn’t mean that you can understand ASL easily and if you know ASL that does not mean that you can necessarily read English easily either. So, that’s a point that, I think, not a lot of people recognize, that English, in a lot of cases, is a person’s second language. They can grow up signing in the home and then learn English as a second language at school and as, you know, anyone listening who has learned a second language knows, it’s not as comfortable most of the time.

Host: Let’s talk about your tool belt for a minute. You’ve situated yourself at the intersection of AI and HCI, leaning more towards HCI, and much of your research is building systems, but you still face some of the big challenges with enough data and good enough data as we’ve talked about. Talk about the research methodologies and technical tools you’re using and how you’re working to tackle the challenges that you face.

Danielle Bragg: Yeah, so as you mentioned, I do do a lot of systems building. I do a lot of website building, full-stack engineering, and there’s a whole set of skills that go into that. As far as data collection goes, I’ve used a lot of crowdsourcing, whether that be on an existing platform like Mechanical Turk, or building a new platform to collect data in other ways. We also incorporate a lot of applied machine learning techniques in the dictionary, for example, that I was explaining. Our back end is powered by Latent Semantic Analysis, which basically does a big dimension reduction on the feature space to figure out which dimensions are actually meaningful in completing the search. I also do a lot of user studies, interacting with users in a variety of ways, and engage in a number of design practices that incorporate key stakeholders. So, in particular, securing research partners who are deaf, but also engaging in participatory design and other ways to engage with the community. I like a combination of qualitative and quantitative work which, I guess, is kind of a catchphrase these days, but…
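For readers wondering what “a big dimension reduction on the feature space” looks like in code, Latent Semantic Analysis is commonly implemented as a truncated SVD. The sketch below is a minimal, assumed setup with toy data; the matrix shape, component count, and data are illustrative, not details of the dictionary’s real back end.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy query-by-feature matrix: rows are past queries, columns are selectable
# features (hand shapes, locations, movements). Real data would come from logs.
rng = np.random.default_rng(0)
X = (rng.random((200, 40)) > 0.8).astype(float)

# LSA: project the sparse feature space onto a few latent dimensions that
# capture the feature combinations that matter most for matching.
lsa = TruncatedSVD(n_components=10, random_state=0)
X_latent = lsa.fit_transform(X)

# A new query is projected into the same latent space and compared there,
# which makes the search tolerant of noisy or partial feature selections.
new_query = (rng.random((1, 40)) > 0.8).astype(float)
scores = cosine_similarity(lsa.transform(new_query), X_latent)[0]
print("closest past queries:", np.argsort(-scores)[:5])
```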

Host: Right, right, right. Let’s project a bit and speculate how the work you’re doing for the Deaf community might have a positive, if unintended, impact on the broader population. Some people call this the “curb-cut effect,” where something that was supposed to help one group ended up helping everybody, or populations it wasn’t expected to help. You know, the curb cut that was made for wheelchairs turned out to be great for strollers and cyclists and people rolling suitcases and everything else. So, do you have any thoughts on other application arenas that face similar challenges to sign language computation? One thing that comes to mind is dance annotation. I have a background in that and it’s full-body expression as well.

Danielle Bragg: It’s funny that you mention dance because there are a lot of parallels there. In particular, sign languages actually don’t have a widely accepted written form, and that causes a lot of the barriers to using our text-based interfaces in a sign language, and a lot of the same problems apply to dancers, right? If you’re a dancer or choreographer and you want to write down the dance that you’re coming up with, or the dance that you’re dancing, that can be really hard, and as a result, there’s a woman who came up with a system called DanceWriting, and that system has actually been adapted to create a written form for sign languages called SignWriting. So there definitely are a lot of similarities between, you know, dance and signing, and I would say, more generally, any gesture-based human interaction has a good amount of overlap with sign language research. So, gesture recognition in particular has a lot of similarities to sign recognition. I would say that gesture recognition is actually a simpler problem in many ways because there are no grammatical structures to understand and the context doesn’t change the meaning of a gesture the way it does for a sign, in many cases.

Host: Right. So, gestures might be for a person on a runway who’s bringing the plane in or something, or what you would do with cyclists and what those gestures mean, and they’re pretty simple and straightforward…

Danielle Bragg: Yeah, exactly. Or you could think about interacting with a computer through a simple set of gestures or…

Host: Hmmm.

Danielle Bragg: …for an Xbox. I know there have also been research projects to try to support people learning how to play a particular sport or do yoga more effectively by detecting gestures that the person is making and helping to correct them. Or when you’re learning a musical instrument, for example, the types of gestures that you make make a big difference. So, I think there’s a lot of overlap with other areas where human movement or gesture is involved.

Host: Danielle, we’ve talked about what gets you up in the morning, but now I have to ask what keeps you up at night? And you could call this the “what could possibly go wrong” question. Do you have any concerns about the work you’re doing and if so, how are you addressing them up-front rather than post-deployment?

Danielle Bragg: In all the projects that I do related to sign language, I really do my best to include perspectives from people who are deaf and give Deaf people a platform to be heard and to participate and expand their careers, but that is something that I consciously think about and sometimes worry about. I personally am still learning about Deaf culture and the Deaf experience. I don’t have a lifetime of experience in this space. I’ve taken some ASL classes, but I’m not fluent. I’m also not deaf and I don’t have the Deaf lived experience so it’s particularly important to include those perspectives in the work that I’m doing and I have a number of really wonderful collaborators at Gallaudet University, at Boston University, at Rochester Institute of Technology, and a number of other places. So that’s what I’m doing to try to help with this, you know, with this concern.

Host: Right. Well, what about data collection and privacy?

Danielle Bragg: That’s a great question as well. I do worry about that. In particular, for sign language data, it’s a very personal form of data because the person’s face is in it, their body is in it, the background, you know, if it’s their home or their workplace or wherever they’re signing is in it. So, there are a lot of privacy concerns involved in this space. I’ve done some preliminary work exploring how we might be able to impose certain types of filters on videos of people signing. You know, for example, blurring out their face or replacing their face with an avatar face. Of course, if the movement is still there, if you know the person very well, you might still be able to recognize them just from the types of movements that they’re making, but I think there are things we can do to improve privacy at least, and it seems like a very interesting, rich space to work in.
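As one concrete sketch of the kind of filter Bragg mentions, the snippet below blurs any detected face in a frame while leaving the hands and body visible. The detector choice, parameters, and file names are assumptions for illustration; as she notes, movement alone may still identify a signer, so this is a mitigation rather than full anonymization.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (a simple, widely available detector).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame, kernel=(51, 51)):
    """Blur detected face regions in a BGR frame; the signing hands stay untouched."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(frame[y:y + h, x:x + w], kernel, 0)
    return frame

# Example on a single frame ("signing_frame.png" is a placeholder path):
# frame = cv2.imread("signing_frame.png")
# cv2.imwrite("signing_frame_blurred.png", blur_faces(frame))
```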

Host: Well, it’s story time. What got young Danielle Bragg interested in computer science and what path did she take to follow her dreams and end up working at Microsoft Research New England?

Danielle Bragg: So, in my undergrad, I studied applied math. Growing up, math was always my favorite subject, and I still enjoy mathematically oriented work. Towards the end of my undergrad I didn’t know exactly what I wanted to do, but I wanted to do something practical, so I decided to go to grad school for computer science. It seemed like a practical decision. But in grad school I was really searching for projects that had some human impact and that hopefully were making a positive impact in the world and that’s where I really got interested in accessibility. So, I met my former PhD advisor, Richard Ladner, at the University of Washington, and he introduced me to the field of accessibility. He got me taking ASL classes and working on some problems in this space that I’m still working on today.

Host: So, did you just fall into a job at Microsoft Research or were you an intern? Is that the typical pathway to the job, or how did that happen?

Danielle Bragg: I did intern at Microsoft. I’ve interned at Microsoft three times, actually. Once in the Bing Search group and then two times as a research intern with Adam Kalai in the New England lab, and then I did a postdoc at the New England lab for two years, and now I am a full-time researcher in the lab so I can’t… I can’t go anywhere else! I’m forever a New England researcher.

Host: Awesome. What’s something we don’t know or might not suspect about you? Maybe a character trait, a life event, a hobby/side quest and how has it impacted your life and career?

Danielle Bragg: So, I’ve spent a lot of time training in classical music performance, actually. I played the bassoon pretty seriously through college and considered being a professional musician at that point. I studied with someone in the Boston Symphony, I went to summer music festivals, which is a thing that pre-professional musicians do in the summers, and I still have a lot of friends and acquaintances in orchestras and playing chamber music. And I would say music really added a lot of richness to my life. In addition to my love of music, I think my training as a musician actually had a lot in common with my research training. So training to be a professional musician takes a lot of practice and dedication and it’s more of an apprentice model, so you usually study closely with one teacher at a time and they really teach you, you know, how to play, how to make reeds, if your instrument requires reed making, and actually being trained to do research is quite similar in a lot of ways, right? You have your PhD advisor who you work closely with and you learn from doing research alongside them. So, I didn’t plan it, originally, but I think that, you know, being trained as a classical musician probably actually helped me a lot with training to do research.

Host: I love that. You know, there’s such a huge connection between music and math, by the way, that so many researchers I’ve talked to have had that musical interest as well, but not in the classical, bassoon-playing category. So, you’re unique in that.

Danielle Bragg: Yeah, bassoon is a, a different one.

Host: I grew up… my mom had a record of Peter and the Wolf and all the different animals were represented by the different instruments and I remember the bassoon, but I can’t remember the animal it was associated with. I’ll look it up after we’re done.

Danielle Bragg: I think it’s the grandfather, but I could be wrong.

Host: Well, as we close, and I’m sad to close, as we close…

Danielle Bragg: Me too.

Host: This has been so much fun! I’ve taken to asking what the world might look like if you’re wildly successful and some people frame this in terms of solving problems that would impact millions or billions of people, but I think sometimes the goal is less grandiose and the impact might be more meaningful to a smaller population. So, at the end of your career, what do you hope to have accomplished in your field and how would you like life to be different because of your research?

Danielle Bragg: Well, it might sound a little cheesy or cliché, but I really hope to leave the world a little bit better than it was when I started out, and in my career, I hope I’ll have helped people get access to information that they may not have had access to beforehand. I think education is so key to so many things. You know, not only degrees that you get from schools, but your personal development, different types of skill development, or just general understanding of the world. And I think if you don’t have access to information, that’s really, really a problem, right? At least if you have access to the information, you can decide whether you want to consume it, you can decide what you want to do with it, and you have the possibility of learning or advancing yourself, but if you don’t even have access then, you know, what can you do? So, a lot of my work is focused on increasing access for people who use languages that are not often served or supported, or who have difficulty accessing information in other ways.

Host: Danielle Bragg, this has been really great. I have learned so much from you and I’m so inspired by the work you’re doing. Thank you so much for coming on the podcast today.

Danielle Bragg: Yeah, thank you.

(music plays)

To learn more about Dr. Danielle Bragg, and the latest in accessibility research efforts, visit Microsoft.com/research

And for the record, it WAS the grandfather!
