Keeping an Eye on AI with Dr. Kate Crawford

Published

Dr. Kate Crawford – Principal Researcher

Episode 14, February 28, 2018

Artificial intelligence has captured our imagination and made many things we would have thought impossible only a few years ago seem commonplace today. But AI has also raised some challenging issues for society writ large. Enter Dr. Kate Crawford, a principal researcher at the New York City lab of Microsoft Research. Dr. Crawford, along with an illustrious group of colleagues in computer science, engineering, social science, business and law, has dedicated her research to addressing the social implications of AI, including big topics like bias, labor and automation, rights and liberties, and ethics and governance.

Today, Dr. Crawford talks about both the promises and the problems of AI; why, when it comes to data, bigger isn’t necessarily better; and how, even in an era of increasingly complex technological advances, we can adopt AI design principles that empower people to shape their technical tools in ways they’d like to use them most.

Transcript

Kate Crawford: There is no quick technical fix to bias. It’s really tempting to want to think that there’s going to be some type of silver-bullet solution, that we can just tweak our algorithms or, you know, use different sorts of training data sets, or try to boost signal in particular ways. The problem with this is that it really doesn’t look to the deep social and historical issues that human data is made from.

(music)

Host: You’re listening to the Microsoft Research podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Artificial intelligence has captured our imagination and made many things we would have thought impossible only a few years ago seem commonplace today. But AI has also raised some challenging issues for society writ large. Enter Dr. Kate Crawford, a principal researcher at the New York City lab of Microsoft Research. Dr. Crawford, along with an illustrious group of colleagues in computer science, engineering, social science, business and law, has dedicated her research to addressing the social implications of AI, including big topics like bias, labor and automation, rights and liberties and ethics and governance.

Today, Dr. Crawford talks about both the promises and the problems of AI, why, when it comes to data, bigger isn’t necessarily better, and how even in an era of increasingly complex technological advances, we can adopt AI design principles that empower people to shape their technical tools in ways they would like to use them most.

That and much more on this episode of the Microsoft Research podcast.

(music)

Host: Welcome Kate Crawford to the podcast. Great to have you here with us from New York City.

Kate Crawford: Thank you so much Gretchen. It’s a pleasure to be here.

Host: So, you’re in the, as we’ve said, New York City lab of Microsoft Research. What goes on in the Big Apple?

Kate Crawford: Ahhh! So much. It’s a city that doesn’t sleep where research is concerned. Look, there’s so much going on here. We actually have an abundance of exciting research initiatives. Obviously, here we have the Microsoft Research New York office, and in it we have, really, four groups writ large. We have a really fantastic machine learning group. We have a computational social science group. We have an algorithmic economics group. And we have the FATE group, which is a group that I co-founded with my colleagues Hannah Wallach and Fernando Diaz, and it stands for fairness, accountability, transparency and ethics. And that group is coming into its third year now. We really, early on, saw that there were going to be some real concerns around ensuring that large-scale decision-making systems were producing fair results and fair predictions, and we also needed to start thinking much more about accountability for decision-making, particularly in relation to machine learning and artificial intelligence. And, of course, ethics, which is a very broad term that can mean all sorts of things. For us, it means really looking at how people are affected by a whole range of technologies that are now touching our lives, be that in criminal justice or education or in healthcare. So that was the reason we formed the FATE group here in New York. In addition to what’s happening here at MSR NYC, there are also other groups. There’s Data and Society, which is headed by danah boyd, just around the corner from this building. And then at NYU, we have a brand-new research institute that I co-founded with Meredith Whittaker, called the AI Now Institute. And that’s the first university institute to focus on the social implications of artificial intelligence. So, there’s a lot going on in New York right now.

Host: I’ll say. You know, I want to ask you some specific questions about a couple of those things that you mentioned, but drill in a little bit on what computational social science means.

Kate Crawford: Yeah. That group is headed by Duncan Watts and, realistically, they are looking at large-scale data to try and make sense of particular types of patterns. So, what can you learn about how a city is working when you look at traffic flows, for example? What are the ways in which you could contextualize people’s search information in order to give them different types of data that could help? I mean, there are lots of things that the CSS group does here in New York.

Host: Well, let’s go back to FATE, because that’s kind of a big interest of yours right now. Fairness, accountability, transparency and ethics.

Kate Crawford: Yeah.

Host: And you sort of gave us an overview of why you started the group, and it’s fairly nascent. Is it having an impact already? Is it having the impact you hoped for in the high-tech industry?

Kate Crawford: Absolutely. We’ve been really thrilled. I mean, even though it’s only 3 years old, in some ways that’s actually quite established for thinking about these issues. As a researcher, I’ve been focusing on questions around fairness and due process in large-scale data systems and machine learning for over ten years. But it’s really only been in the last 18 months or so that we’ve seen this huge uptick in interest across both academia and industry. So, we’re starting to see algorithmic accountability groups emerge in the key technology companies here in the US. We also have conferences like the FAT/ML conference, which stands for fairness, accountability and transparency, you guessed it, and which has now become a blockbuster hit. It’s actually taking place here in New York City in just two weeks, to a full house and an extensive waiting list. So, it’s really taking off. But we are also seeing it really start to have impact within Microsoft itself. I mean, here, Hannah and I, for example, hear from various product groups who are coming up with questions: if they think a system might produce, say, a discriminatory output, what should they do? Or they might have concerns about the data that a system has been trained on: what sorts of questions might you want to ask? Including all the way through to the big policy questions that we need to ask here. And we’re doing things like speaking to the European Commission, to the UN, etc. So, for a small group, it’s just four of us, I think it’s already having a pretty outsized impact.

Host: Interesting that you say you’ve been doing this for about ten years, which I think is surprising to a lot of people, with machine learning coming to the forefront now. Why, do you think, only in the last 18 months have we seen an uptick?

Kate Crawford: I think it has a lot to do with scale. I mean, what’s particularly interesting to me is that just in the last three months or so, we’ve seen leaders from major technology companies, including Satya Nadella, one of the co-founders of DeepMind, Mustafa Suleyman, and the head of Google AI, all say that fairness and bias are the key challenges for the next five years. So, it’s gone from being something that was a relatively bespoke concern, shall we say, to becoming front-of-mind for all of the leaders of the major technology companies. And I think the reason why, and certainly the reason that Mustafa gave when he was making his statement about this, is that if you have a system that’s producing a discriminatory result, say, for example, in search or in ads, it is affecting a billion to two billion users a day, easily. So, at that scale, that can be extremely costly and extremely dangerous for the people who are subject to those decisions. So, I think, in that sense, it’s really a question of how many people can be affected by these sorts of systems, given their truly vast reach.

(music)

Host: You recently gave a talk at the NIPS conference, Neural Information Processing Systems, and the talk was called “The Trouble with Bias.” What is the trouble with bias, Kate?

Kate Crawford: Well, NIPS is an amazing conference, I should say, out of the gate. It was a real honor to speak there. It is, in fact, the largest machine learning conference in the world. I was speaking in this room as sort of the opening keynote to around 8,000 people, so it kind of felt like being at a Van Halen concert or something extremely stadium rock. It was extraordinary to see. But I was really interested to talk about what’s been happening with this concept of bias. And I looked at, particularly, some of the cases that we’ve seen emerge that have been interesting to researchers. So, for example, everything from: if you do an image search in Bing or in Google for “CEO,” you’ll find a lot of images of white men in suits. And, depending on which way the algorithm is blowing that day, the first woman that you’ll see is, quite often, CEO Barbie. So, it raises very troubling questions about how we represent gender. And then, of course, there is a whole series of studies coming out now looking at how we represent race. And here we can look at anything from the way that a training data set like Labeled Faces in the Wild is around 79% male and 84% white, so those are the sorts of populations that systems trained on Labeled Faces in the Wild will perform best for. So, that’s in the space of facial recognition. But what I was talking about at NIPS was really sharing brand-new research that I’ve been working on with Solon Barocas at Cornell, Hannah Wallach here at Microsoft Research, and Aaron Shapiro at U Penn, where we’re looking at the way that bias in computer science has traditionally been studied. And basically, the way that it’s been studied so far (we did a big analysis of all of the papers) is that it really looks at the types of harms that cause a kind of economic, or what we call allocative, harm. So, it means that a system is determined to be biased if you don’t get a job, or if it decides that you don’t get bail, or if you can’t get access to credit. But there’s a whole range of other harms, which we call representational harms, which don’t necessarily mean that you don’t get a job, but might mean the denigration of a particular category or community.
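To make the data-set composition point concrete, here is a minimal sketch, in Python, of the kind of demographic audit a practitioner might run before training on a labeled face data set. The record format and attribute names are illustrative assumptions, not the actual Labeled Faces in the Wild annotations.

```python
# A minimal, illustrative audit of a training set's demographic make-up.
# The metadata schema below is hypothetical; a real audit would load the
# data set's own annotation files.
from collections import Counter

def attribute_breakdown(records, attribute):
    """Return the share of each value of `attribute` across the data set."""
    counts = Counter(r[attribute] for r in records if attribute in r)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Toy example records standing in for per-image metadata.
records = [
    {"gender": "male", "skin_type": "lighter"},
    {"gender": "male", "skin_type": "lighter"},
    {"gender": "female", "skin_type": "darker"},
    {"gender": "male", "skin_type": "darker"},
]

print(attribute_breakdown(records, "gender"))     # {'male': 0.75, 'female': 0.25}
print(attribute_breakdown(records, "skin_type"))  # {'lighter': 0.5, 'darker': 0.5}
```

Comparing a breakdown like this against the population a deployed system will actually serve is where skew such as 79% male and 84% white becomes visible before a model is trained, rather than after it ships.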

Host: Let’s talk for a minute about the data sets that actually train our AI models. As far as the data goes, we’re often told that bigger is better. But what you just said suggests this might not be the case. Can you explain how bigger data isn’t necessarily better data?

Kate Crawford: Yeah, I mean, there has been, I think, this perception for some time now that the more data we have, the more representative it is. But that’s simply not the case, if you are over-sampling from a particular population. Think, for example, of Twitter data. There was a period, about five years ago, when people really thought Twitter data was going to be the best way to understand what was happening during a natural disaster or humanitarian crisis. But it was very, very clear from early on, and certainly some work that I was doing many years ago showed just how skewed the demographics were of the people who were using Twitter at that time, let alone of the people who have access to smartphones. So, depending on where you are in the world, that means it’s not a particularly reliable signal. So, even if you have hundreds of thousands of data points, if all of those data points are being drawn from affluent populations who live in urban centers, then you’re only seeing one part of the picture, and it’s going to be extremely difficult for you to extrapolate from that. And there are very related problems happening with training data right now. Training data, you know, often comes from sets that have hundreds of thousands, if not millions, of particular items within them. But if they are sampled from an already skewed pool, then that’s still going to produce skewed results. I mean, there have been some really interesting examples, I think, that we can look at here. And they come from all sorts of interesting places. In the case of criminal justice, there’s been a lot of controversy around the use of the COMPAS risk assessment system, which essentially tries to predict a risk score for whether or not somebody will re-offend as a violent criminal. But of course, it’s trained on data that is historical crime data. And again, many criminologists and social scientists point to the fact that there is a long history of racist policing in the US. So, if you’re already coming from a baseline where people of color and low-income communities are far more likely to be stopped by the police, to be arrested by the police, and to be generally surveilled, they will be over-represented in those samples. And then, if you’re training a system on that data, how do you actually accommodate that? These are really hard questions, and I think what I’ve certainly learned from looking at these questions as a researcher is that there is no quick technical fix to bias. It’s really tempting to want to think that there’s going to be some type of silver-bullet solution, that we can just tweak our algorithms or use different sorts of training data sets or try to boost signal in particular ways. The problem with this is that it really doesn’t look to the deep social and historical issues that human data is made from. Essentially, data reflects our human history, and our human history itself has many, many instances of bias. So, I think what’s so interesting, when I talk about the trouble with bias, is that, particularly in computer science, we tend to scope the problem too narrowly and to think of it purely as something we can address technically. I think it’s incredibly important that we understand it as a socio-technical problem. That means addressing these sorts of issues very much in an interdisciplinary context.
So, if you’re designing a system that has anything to do with the criminal justice system, you should be working side-by-side with people who have been doing the most important work in those fields. This pertains to every one of the domains: healthcare, education, criminal justice, policing, you name it. We have area experts who, I think, have to this point been largely left out of those development cycles, and we really need them in the room.
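As a purely illustrative aside, with invented numbers and not the COMPAS methodology, a few lines of arithmetic show how heavier surveillance of one group turns identical underlying behavior into very different recorded rates, which is exactly the skew a model trained on those records would inherit.

```python
# Toy illustration of how policing intensity, not behavior, can drive the labels
# a risk model is trained on. All numbers are invented for illustration.
true_offense_rate = 0.10                               # assumed identical for both groups
surveillance_rate = {"group_a": 0.9, "group_b": 0.3}   # chance an offense is ever recorded

recorded_rate = {g: true_offense_rate * s for g, s in surveillance_rate.items()}
print(recorded_rate)  # roughly {'group_a': 0.09, 'group_b': 0.03}

# A model fit to these recorded labels would "learn" that group_a re-offends three
# times as often, even though the assumed underlying rates are equal.
```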

Host: Well, I think there’s been a misconception, first of all, that data doesn’t lie. And it can, if you’re only representing specific populations. But there’s also this idea of a sort of “separation of church and state” between technology and the humanities, or technology and social science. And so, what I’m hearing, not just from you but over and over, is that we have to start talking, cross-pollinating, silo-busting, whatever you want to call it, to solve some of these bigger problems, yeah?

Kate Crawford: Absolutely. I couldn’t agree more. I mean, this was really one of the really big motivations behind establishing the AI Now Institute at NYU: we realized we really needed to create a university center that was inviting people from all disciplines to come and collaborate on these sorts of questions. And particularly in terms of issues around, you know, bias and fairness. But even more broadly, in terms of things like labor and automation and the future of work, right through to what happens when we start applying machine learning to critical infrastructure, like the power grid or hospitals. In order to answer any of those questions, you kind of need a really deep bench of people from very different disciplines, and we ended up trying to address that by working with six different faculties to establish AI Now. So, it’s a co-production, if you will, between computer science, engineering, social science, business and law, as well as the Center for Data Science, really because I think you need all those people to make it work.

(music)

Host: Let’s switch to another topic for a bit, and that is this concept of autonomous experimentation. With the proliferation of sensors and massive amounts of data gathering, people may not be aware, much of the time, that they are in fact the data, collected not necessarily with their knowledge or consent. Can you speak to that?

Kate Crawford: Oh, absolutely. I mean, I should say the autonomous experimentation research that we’ve been doing here is very much a collaborative project, work I’ve been doing alongside people like Hannah Wallach and Fernando Diaz and Sarah Bird. And it’s been really fascinating to essentially look at how a series of new systems, which are being deployed far more widely than people realize, are actually doing forms of automated experimentation in real time. And we were looking at a range of systems, including, say, what happens when you use a traffic app. You know, just using a mapping app that’s saying, where is the traffic bad? Where is it good? The way these systems work is that they’re essentially constantly running experiments, large-scale experiments, often on hundreds of thousands of people simultaneously. And this can be good. In many cases, it’s for things like load balancing where people go. If we all got the same directions to go to, say, downtown Manhattan from uptown Manhattan, then the roads would be unusable. They would be completely congested. So, you kind of have to load-balance between different routes. But what that also means is that some people will always be allocated to the less ideal condition of that experiment and some people will be allocated to the ideal condition. Which means that you might be getting the fastest way to get to work that day, and somebody else will be getting a slightly slower way. And now, this sounds absolutely fine when you’re just going to work; a few minutes either way isn’t going to ruin your day. But what if you’re going to hospital? What if you have a sick kid in the back of your car and it’s really urgent that you get to a hospital? How can you say, “No, I really don’t want to be allocated into the less ideal group”? Or this could happen as well in health apps. You know, how can you indicate, in an experiment that is being used to try and, say, make you jog more or do more exercise, that you’re somebody who’s recovering from injury or somebody who has a heart condition? These are the sorts of issues that really indicated to us that it’s important that we start doing more work on feedback mechanisms, on what sorts of consent mechanisms we can think about when people are being allocated into experiments, often without their knowledge. This is very common now. We’re kind of getting used to the fact that A/B experiments at scale really began with search. But now they’re moving into much more intimate systems that guide our everyday lives, from anything that you’re using on your phone about your health or your engagements with friends, right through to, you know, how you spend your time. So, how do we start to think about the consent mechanisms around experimentation? That was the real motivation behind that series of ongoing studies.
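A minimal sketch of one consent-style mechanism along the lines described above: randomized allocation into experiment arms, with a user-declared constraint that routes the user out of the test condition. The function, flag, and arm names are hypothetical and not drawn from any shipping system.

```python
# Hypothetical allocation routine: honor an urgency flag before randomizing.
import random

def allocate(user_id, urgent=False, arms=("control", "slower_route_test")):
    """Assign a user to an experiment arm, honoring a user-declared urgency flag."""
    if urgent:
        return "control"          # never place an urgent trip into a test condition
    rng = random.Random(user_id)  # deterministic, reproducible per-user assignment
    return rng.choice(arms)

print(allocate(user_id=42))                # reproducibly chosen arm for this user
print(allocate(user_id=42, urgent=True))   # always 'control'
```

The hard design question is less about these few lines of code and more about how a user would ever surface that flag in systems that offer no screen or consent prompt at all, which is the gap the feedback-mechanism research described above points at.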

Host: Well, and it does speak to this dichotomy between self-governance and government regulation. And because we are in kind of a Wild West phase of AI, a lot of things haven’t caught up yet. However, the European Union has the GDPR, which does attempt to address some of these issues. What is your thinking on whether we go with our own oversight, the “who is watching the watchdog” kind of thing, or invite regulation? What’s going on in your mind about that?

Kate Crawford: It’s such a good question. It’s an incredibly complex question, and unfortunately there are no easy answers here. I mean, certainly GDPR comes into effect in May this year, and I think it’s going to be extraordinarily interesting to see what happens as it begins to be rolled out. Certainly, a lot of the technology companies have been doing a lot of work to think about how their products are going to change in terms of what they’re offering. But it will also be interesting to see if it has flow-on effects for other parts of the world, like the US and a whole range of other countries that aren’t covered by GDPR.

Host: Well, the interesting thing for me is that, say, the European Union has it. It has far-reaching tentacles, right? It’s not just for Europe. It’s for anyone who does business with Europe. And it does, as you say, represent a very complex question.

Kate Crawford: It does. I mean, I think about this a lot in terms of artificial intelligence writ large, and that term can mean many things. So, I’m using it here to refer not just to machine learning-based systems, but to a whole range of other technologies that fit under the AI banner. And this is something that is going to have enormous impacts over the next ten years. And there’s a lot of attention being paid to what the types of regulatory infrastructures are, what the types of state and corporate pressures on these sectors are, and how this is going to change the way that people are judged by these systems. As we know, China has a social credit score. Some people find this quite a disturbing system, but there are many things in the US that are not dissimilar. So, we’re already moving quite rapidly into a state where these systems are being used to have direct impacts on people’s lives. And I think there are going to be a lot of questions that we have to ask about how to ensure that that is both ethical and, I think, equitable.

Host: Right. And that is where, interestingly, I think, some good research could be happening, both on the technical side and the social science side, as we address these issues, with all of the sorts of expertise and disciplines that you talked about working together in the FATE group.

(music)

Host: So, let’s talk for a second about… there are so many questions I want to ask you, Kate, and there’s just not really enough time. So, I’m coming to New York. I’m going to get into the bad traffic there and come see you.

Kate Crawford: Great.

Host: Listen, the overarching question for me right now is about how we take these big, thorny, prickly questions… issues, and start to address them practically. What are your thoughts on how we can maybe re-assert or re-capture our “agency” in what I’d call a “get-the-app” world?

Kate Crawford: Yeah, I think that’s a really fascinating question. I mean, what’s interesting, of course, is how many systems that touch our lives these days don’t even have a screen interface. In many, many major cities, you’re walking down the street, your face is being recorded, your emotions are being predicted based on your expressions that day, your geolocation is being tracked through your phone. These are all things that don’t involve any type of permission structure, a screen, or even you being aware that your data and your movements are being ingested by a particular system. So, I think my concern is that while more granular permission-based structures are possible, urban computing has shifted away from the devices directly in front of us to be embedded throughout architectures, throughout streets, throughout so many systems that are in many ways invisible to us. And they’re already having a real impact on how people are being assessed, and the sorts of impacts that they might experience just walking around the city in a day. So, I think we are coming up with things that would’ve been great to have, and would still be useful in some contexts, but they don’t resolve, I think, these much bigger questions around: what is agency, when you’re really just a point amongst many, many other data points, being used by a whole range of systems that are in many ways completely opaque to you? I think we need to do a lot more work. And certainly, I would agree with you, this is an urgent area for research. We just desperately need more people working in these areas, both technically and from these more social science perspectives.

Host: You know, it’s funny as you’re speaking, Kate, you actually just identified basically a generation skip, as it were. You know like if a country doesn’t have landline phone infrastructure and goes straight to cell phones, right? And so just when we’re thinking we ought to be more cognizant about giving consent to apps and technologies, you’re bringing up the excellent point that there’s so much going on that we don’t even get asked for consent.

Kate Crawford: Absolutely. And also, I mean, there’s another thing here. I think that we’ve really moved beyond the discussion of, you know, the idea of a single person looking at an app and deciding whether or not you should allow it to have access to all of your contacts. We’re in a very different state, where you could be using an app or system and you’re thinking that it’s for one thing, but it’s actually doing something else. I mean, the classic case here is, say, you know, sort of, the Pokémon Go craze, where you are out to sort of catch little Pokémon in an augmented reality environment in cities. But that became… or was being used to harvest a massive training data set to really generate sort of new maps and geo-locative data. So, people, in some ways, think that they’re doing one thing, but their data is being used to train a completely different type of system that, you know, they may have no idea about. So, I think, again, this idea that we’re even aware of what we’re participating in, I think, has really moved on.

Host: Yeah, again with the idea that we’re on a podcast and no one can see me shaking my head. This does bring up questions about the practical, potential solutions that we have. Is there, from your end, any recommendation, not just about what the problems are, but about who should be tackling them and how? I mean…

Kate Crawford: Yeah, absolutely. I mean, while our conversation does cover some really thorny and, I think, quite confronting questions that we’re going to need to contend with as researchers and as an industry, I think that there’s also a lot of hope and there’s a lot that we can do. One of the things that we work on at the AI Now Institute is an annual State of AI report. And in that report, we make a series of recommendations every year. And in the last report, which just came out a couple of months ago, we made some very clear and direct recommendations about how to address some of these core problems. One of them is that we think that before releasing an AI system, particularly in a high-stakes domain, so, all of those things we’ve chatted about like healthcare, criminal justice, education, companies should be running rigorous pre-release trials to ensure that those systems aren’t going to amplify errors and biases. And I see that as a fairly basic request. I mean, it’s certainly something we expect of pharmaceutical drugs before they go on the market: that they’ve been tested, that they won’t cause harm. The same is true of consumer devices. Ideally, they don’t blow up, you know, with some exceptions. You really want to make sure that there’s some pretty extensive testing and trials, and then trials that can be publicly shared, so that we can say that we have assessed that this system doesn’t produce disparate impact on different types of communities. Another recommendation, another thing we can do, is that after releasing these sorts of complex, often algorithmically driven systems, we continue to monitor their use across different contexts and communities. Often, a type of system is released, and it’s assumed that it’s just going to work for everybody for an indefinite period of time. That’s just not the case. I was looking at a study recently that suggested that medical health data has a 4-month half-life. That’s 4 months before it becomes out of date, before the data in that training data set is actually going to be contradicted by things that have happened after the fact. So, we need to keep thinking about: how long is a system relevant? How might it be performing differently for different communities? So, this is another recommendation that we feel very strongly about. But there are many more, and if people are interested in particular types of research questions or concrete steps that we can take moving forward, the AI Now 2017 report has a lot of those in there for further reading.
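For a sense of what even a very simple pre-release check might look like, here is a minimal sketch that compares favorable-outcome rates across groups from a trial run and flags the model when the gap crosses a common rule-of-thumb threshold. The group names, rates, and the use of the four-fifths threshold are illustrative assumptions, not something prescribed by the AI Now report.

```python
# Hypothetical pre-release disparate impact check on trial-run outcomes.
def disparate_impact_ratio(positive_rates):
    """Ratio of the lowest to the highest favorable-outcome rate across groups."""
    return min(positive_rates.values()) / max(positive_rates.values())

rates = {"group_a": 0.62, "group_b": 0.48}   # invented favorable-outcome rates from a trial
ratio = disparate_impact_ratio(rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the common 'four-fifths' rule of thumb
    print("flag for review before release")
```

A real trial would also need the ongoing monitoring Dr. Crawford describes, since a ratio that looks acceptable at launch can drift as the system meets new communities and as the training data ages.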

Host: Yeah. I would actually encourage our listeners to get it and read it, because it is fascinating, and it addresses things from, sort of, an upstream point of view, which is, I think, where we need to go. We’re a bit downstream, because we’ve released the Kraken of AI, in many ways. It’s like suddenly they’re… in fact, back to the original… you know, the last 18 months, I think part of the reason, maybe, that we’re seeing an increased interest is because people are realizing, “Hey, this is happening, and it’s happening to me, and now I care!” Prior to that, I just kind of went along and didn’t pay attention. So, attention is a good thing. Let me ask you this. You mentioned a concept from architecture called “desire lines” in a talk that I heard, and I loved it, in the sense that it’s like shortcuts in public spaces, where, you know, you find a path on the grass just because people don’t want to go around on the concrete. And I would say things like Napster and BitTorrent are sort of examples of technical desire lines, where it’s like, I’m going to go around. Is there anything in that space that’s happening now from a grassroots perspective, where there’s a sort of “take back the night” kind of thing going on in the AI world?

Kate Crawford: Yes, I love this. You know, I think there is. There are some really nice examples of very small grassroots efforts that have been extremely successful in doing things like essentially creating spaces of anonymity, where you’re much more protected from your data being harvested for reasons that you may or may not have agreed to. Even things like the private messaging service Signal, which, again, was created really by, you know, one guy and a few of his friends. Moxie Marlinspike has been, I think, very much a champion of the idea that individuals can create these types of systems that can create more freedom and more agency for people. And there are others, too. I think it’s going to be interesting to think about how that will happen in an AI-driven world. And even, I think, in the major technology companies, I think it’s really important to create ways that people can start to shape these tools in the way that they’d most like to use them, and to give some space for sort of desire lines, where you can say, well, I don’t actually want my AI system to work this way, or I don’t want it to have access to this information. But how can I train it to do what I want it to do, to make it an advocate for me? So, rather than serving the interests of, you know, just a particular company, it’s really there as my agent, as someone who’s going to look out for things that I care about. These are things that we can think of as design principles. And certainly, it’s something that people do talk about a lot at Microsoft Research. And it’s something that, I think, is really exciting and inspiring for more work to happen now.

Host: I couldn’t agree more. It feels like there’s a “call” both to the technical designers and makers and to the end users, who need to say, hey, I need to pay attention. I just can’t be lazy. I need to take agency.

Kate Crawford: I think that’s absolutely right.

Host: Kate, before we go, what thoughts or advice would you leave with our listeners, many of whom are aspiring researchers who might have an interest in the social impact of AI? I would say go broad on this, for both computer scientists, social scientists, any kind of, you know… the interdisciplinary crowd.

Kate Crawford: Wow. Well, first of all, I’d say, welcome! You’re actually asking some of the most important questions that can be asked right now in the research world. And it would be amazing to see people really start to dig into specific domain questions, specific tools, and really start to say, you know, what kind of world do we want to be living in, and how can our tools best serve us there? In terms of resources that you can go to now, there really are some great conferences. Even the big, you know, machine learning conferences like NIPS have workshops focused on things like fairness and accountability. We have the FAT/ML conference, which is annual. But there are also, you know, the AI Now conferences, which happen every year. And a lot of, I think, discussion has been happening in a series of groups and reading groups in various cities that people can connect with. And I just think there’s a thriving research community now that I’m certainly starting to see grow very rapidly because these questions are so pressing. So, in essence, I’d say regardless of your field, AI is going to be changing the way we think and work. And that means that, likely, if you’re a researcher, this is something that you want to start caring about. And please, if there are ways that I can help, or if people want to get in touch, they are welcome to do so.

Host: Kate Crawford, thank you for taking time out of your schedule. Like I said, I want to keep talking to you forever.

Kate Crawford: Such a pleasure. Thank you so much, Gretchen. It was great.

(music)

Host: To learn more about Dr. Kate Crawford’s work on the social impacts of AI, and to download the most recent report from AI Now, visit Microsoft.com/research.
