Autonomous systems, aerial robotics and Game of Drones with Gurdeep Pall and Dr. Ashish Kapoor

Gurdeep Pall and Ashish Kapoor on the Microsoft Research Podcast

Episode 100 | November 27, 2019

There’s a lot of excitement around self-driving cars, delivery drones, and other intelligent, autonomous systems, but before they can be deployed at scale, they need to be both reliable and safe. That’s why Gurdeep Pall, CVP of Business AI at Microsoft, and Dr. Ashish Kapoor, who leads research in Aerial Informatics and Robotics, are using a simulated environment called AirSim to reduce the time, cost and risk of the testing necessary to get autonomous agents ready for the open world.

Today, Gurdeep and Ashish discuss life at the intersection of machine learning, simulation, and autonomous systems, and talk about the challenges we face as we transition from a world of automation to a world of autonomy. They also tell us about Game of Drones, an exciting new drone racing competition where the goal is to imbue flying robots with human-level perception and decision making skills… on the fly.

Transcript

Host: Welcome to the 100th episode of the Microsoft Research Podcast! Join me in celebrating with two special guests, the head of Microsoft’s Business AI division and the lead researcher for aerial informatics and robotics, as we explore how the power of simulation is paving the way for robust autonomous systems in real world situations. 

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga. 

There’s a lot of excitement around self-driving cars, delivery drones, and other intelligent, autonomous systems. But before they can be deployed at scale, they need to be both reliable and safe. That’s why Gurdeep Pall, CVP of Business AI at Microsoft, and Dr. Ashish Kapoor, who leads research in Aerial Informatics and Robotics, are using a simulated environment called AirSim to reduce the time, cost and risk of the testing necessary to get autonomous agents ready for the open world. 

Today, Gurdeep and Ashish discuss life at the intersection of machine learning, simulation and autonomous systems, and talk about the challenges we face as we transition from a world of automation to a world of autonomy. They also tell us about Game of Drones, an exciting new drone racing competition where the goal is to imbue flying robots with human-level perception and decision making skills… on the fly. That and much more on this episode of the Microsoft Research Podcast. 

(music plays) 

Host: I want to welcome everyone to – are you ready? – the hundredth episode of the Microsoft Research Podcast! This is going to be a fun show because with me I have Gurdeep Pall, who is a Corporate Vice President at Microsoft and head of the Business AI division, and he leads the AI and Research Leadership Team. And beside him is Dr. Ashish Kapoor, who’s a podcast alum. My guest on Episode 22. Welcome back! 

Ashish Kapoor: Yeah, thank you. 

Host: And Ashish is the Senior Principal Research Manager for the Aerial Informatics and Robotics Group at Microsoft, aka AIR. I’m glad to have you both here on this special episode. Welcome to the podcast, Gurdeep Pall and Ashish Kapoor! 

Ashish Kapoor: Yeah, thank you very much. Very excited to be here. 

Gurdeep Pall: Yeah, likewise, really excited. 

Host: So I’ve just introduced you briefly but I’d like each of you to give a little more detail on what you do for a living. Let’s start with you, Gurdeep. You’ve worn a lot of hats here at Microsoft and in other places as well. What’s your focus now? What gets you up in the morning? 

Gurdeep Pall: Right now, I’m focused on bringing emergent AI to interesting business problems. And I couldn’t imagine, at this point in my career, working on something more exciting because, the way we think of it, AI is the fourth industrial revolution. So bringing AI into all these different domains around us is one of the most exciting opportunities, I think, ever, and certainly for my career. 

Host: Is that your mandate, then, is to bring AI to the world at large? 

Gurdeep Pall: Yes, exactly, though in a slightly more focused way, in the sense that, you know, AI, of course, can be applied to many things, and Microsoft has lots of products, and you can infuse AI into those products. My focus is, how do you create new businesses with emergent AI? 

Host: Ashish, give us a “drone’s eye view” of your work and what you’re doing. What gets you up in the morning? 

Ashish Kapoor: Yeah, work? Who works? We play! Yeah, so I would say I’m very blessed to work in an organization where we have the opportunity to pursue some of our ambitions. Specifically, what I do at Microsoft is lead a research group that’s working at the intersection of autonomous systems, machine learning and simulation. So specifically, what we are interested in is building these intelligent agents, and when you think about intelligence, machine learning and artificial intelligence have a big role to play in that. And so what we try to do is take a look at some of the, I would say, techniques on the edge, innovate in those techniques, and then try to solve these hard problems in autonomy. People have been doing machine learning for several decades now, but most of the machine learning right now stops at making predictions. But when you start building systems that act upon those predictions, that’s when things get interesting. And that’s what we do. That’s what we start to think about: how we can take these machine intelligence and machine learning techniques and apply them to systems that act on them, and autonomous systems are a great example of systems that need to act upon the kinds of inferences that machine learning folks have been producing for years. 

Host: Before we really get into it, I’d like to establish for our listeners why the two of you are here together. Obviously, Gurdeep, you’re the business side of AI and Ashish, you’re the research side of AI. But describe your relationship in terms of how you know each other and how you work together. 

Gurdeep Pall: Great, I’ll start. So, you know, as part of my charter to look at emergent AI and see if new businesses can be created with it, the MO that I have is actually very simple. I just spend a lot of time talking to MSR folks, you know, Eric, Peter, and folks inside their organizations, and I’m basically looking at what area has come far enough along that this, potentially, can be channeled towards an existing business problem? You know, I’ve known of Ashish for a long time. Eric Horvitz used to talk about him all the time on the phone to me. And then, you know, I got to understand the work that he was doing. So as we were thinking about this sort of overarching autonomous system thing, it was very natural for me to work closely with Ashish. And then eventually, as is pretty common with the areas that we have in Business AI, Ashish moved into my team because typically we have research, engineering and business folks all together working closely on one particular mission. So Ashish is heading up the research side of that. 

Host: Okay. 

Ashish Kapoor: So we were part of core Microsoft Research and it was apparent that the impact of some of the work that was happening in our group went beyond just the research impact. It had the potential to empower a lot of machine learning developers and researchers who might want to foray into robotics, for instance. While in research, we have expertise in catering to the research crowd, specifically writing papers and engaging with academia. In fact, we had a lot of folks from academia looking into it. And slowly, it trickled into other parts of engineering as well. And it made sense for us to, you know, start thinking about something bigger than academia, something bigger than research. The technology is mature enough, but it needs guidance and, I would say, the experience of folks who can build businesses around it. 

Host: Right. 

Ashish Kapoor: So consequently, I lead a research team embedded in the larger organization where we continue to innovate and work on technology while keeping an eye on how it can empower and be useful to other folks. 

Gurdeep Pall: You know, and I’ll add to that, you know, Ashish is a researcher, but what I really think makes him almost perfect for the Business AI organization is that he really wants to bring that research into real world solutions. And I think that desire is very important, because a lot of times, you know, researchers prefer to continue pursuing what they’re, you know, very interested in from a research perspective, but he’s very interested in how that can actually be… 

Host: Right.  

Gurdeep Pall: …commercialized or can have a bigger impact. So in that sense, it’s really great to have him. 

Ashish Kapoor: Yeah, and likewise I would say, right, I mean, if you are building a business, instead of just thinking about monetary aspects, I mean, you know, we have the support where the research aspects are equally important. In fact, they are the center of it. So having that kind of balance is very important. So it’s a great place to be. 

Host: Gurdeep, Microsoft is not the only company exploring autonomous systems. Talk about the broader ecosystem in this field and Microsoft’s position within it, maybe starting with your perspective on Microsoft’s evolving role in society and how the company’s goals evolve with it? 

Gurdeep Pall: Yeah, these are great questions. So, you know, I think at the highest level, one way to think of what’s happening around us is, you know, we are really at the onset of the fourth industrial revolution. If you look at what happened with the first industrial revolution, with the steam engine and the factories that got created with it, I mean, it changed societies. This whole notion of childcare, you know, didn’t exist before the first industrial revolution. And the reason it came up is because people, for the first time, left their larger families and moved into these towns which were built around these factories. That’s the kind of far-reaching impact the first industrial revolution had. And then each one of them had a similar impact as well. 

Host: Mmm-hmm. 

Gurdeep Pall: And I think this fourth industrial revolution is going to be just about as dramatic. Maybe more. And I think we just have to prepare for that. But coming down to, you know, specifically what we are focused on… 

Host: Yeah. 

Gurdeep Pall: …today, we see a big shift from automation slowly towards autonomy. 

Host: Mmm. 

Gurdeep Pall: Now, automation has basically enabled the level of productivity that you see today. But automation is very fragile, inflexible, expensive… it’s very cumbersome. Once you set it up and everything is working well, it’s fantastic, and that is what we live with today. You know, autonomous systems, we think, can actually make that a lot easier. Now the broad industry is really still oriented toward automation. So we have to bring that industry over slowly into this autonomous world. And what’s interesting is, while these folks are experts in mechanical engineering and operations research and, you know, all those kinds of important capabilities and logistics, they don’t know AI very well. 

Host: No. 

Gurdeep Pall: They don’t know how to create horizontal tool chains which enable efficient development and operations of these type of systems. So that’s the expertise we bring. I’d add one more point to it, is that the places we are seeing autonomous systems being built, like autonomous driving, they’re actually building it in a very, very vertical way. And, in some ways, that’s a bit of a legacy of how robotics systems have been built in the past. And we don’t think that’s a scalable way, because there’s going to be literally thousands and thousands and thousands of different kinds of autonomous systems, so you need these horizontal building blocks, and that’s what we’re focused on. 

Host: Ashish, I want you to answer the same question but, perhaps, from a research perspective and where you see Microsoft Research, and Microsoft in general, as a player in this space. What’s hype and what’s hope and what’s real? 

Ashish Kapoor: Yeah, yeah definitely. So, you know, automation is where everyone has been focusing, and autonomy promises to make these things simpler because now your automation is more robust. It has the ability to decide when things are not correct and correct them if needed. And, of course, folks have been going after certain verticals, like self-driving. They are spending billions of dollars. What if I want to have a different kind of robot that has a similar kind of capability? Do I go and spend a few billion dollars again to do that? Probably not! 

Host: If you’ve got it, though… 

Ashish Kapoor: Yeah! So… Also, on the other hand, from the research perspective, we are increasingly seeing the impact of machine learning technologies on robotics. So, you know, technologies such as reinforcement learning, imitation learning and now, you know, also, unsupervised or self-supervised learning are playing a bigger and bigger part in autonomy and robotics. 

Host: Mmm-hmm. 

Ashish Kapoor: My background is in machine learning and, the way I went into robotics was, you know, there was a vertical that I really cared about, but it was extremely difficult. To get into robotics, you not only need to worry about hardware, you also have to worry about orchestration of things. You know, having several people making sure things are safe. If your robot does something stupid, it breaks down, and then you’re stuck for a few days. So, consequently, a lot of our work in the group has focused on removing some of these obstacles. If you are a machine learning researcher or developer, if you are a programmer, and if you want to build a robot, what can we bring to you so that the process for automating and making something autonomous is much simpler? So that you don’t have to spend billions of dollars trying to create your ecosystems and everything. 

Host: Right. 

Ashish Kapoor: It’s a tool chain. There are modular parts. There are things like simulations. There are things like deep learning methodologies. There are algorithms that you can mix and match almost like Legos. And can we enable that? 

Host: My brain is going in a thousand directions now because of you two. Thank you very much! I want to put a little finer point on the difference between automated and autonomous. They both share the same root word. What’s the fine differentiation there? 

Gurdeep Pall: The main differentiation is that, let’s say you’re looking at a BMW car assembly line. And today, most car assembly is done by robots. Now, let’s say BMW is launching a new car line. They now need to basically configure that entire assembly line with each station to perform different tasks. Today, the way this line is assembled, is that these robots, they know that exactly at this point in the XYZ space, they need to stop the arm and then they need to basically have a certain torque set… That’s all it does. Then it moves out of the way, a new car comes in, they go exactly to that point and they turn it and it goes on, right? 

Host: Got it. 

Gurdeep Pall: So now, if the car happened to stop half an inch before, it’s going to come to that same point, and then it’s going to have a big problem. That is automation for you. When everything is working exactly correctly it is like fast chu, chu, chu, chu… 

Host: Really good. 

Gurdeep Pall: Exactly. In an autonomous world, the arm has to find this particular place and now this thing seeks and finds the right place, even if it is, you know, five inches this way, and it will find it and just align itself correctly and then go for it. 

Host: Okay. 

Gurdeep Pall: That is autonomous. 

Host: Okay. 

Gurdeep Pall: Where it has a certain level of planning capability and then control capability to deal with unforeseen situations. 

Host: So the autonomous setup is upstream of the automated because it’s still going to be running the same kind of thing, but you’re giving it a brain. 

Gurdeep Pall: Exactly. 

(music plays) 

Host: Let’s get technical. You’ve just alluded to machine learning and some various flavors therein, and then I really want to talk about reinforcement learning now, especially because it seems to be the brains behind the autonomous systems we’re talking about. So, Gurdeep, would you start by talking about the machine learning landscape and the various methodologies that are involved? 

Gurdeep Pall: Sure. When we talk about AI today, probably the most prevalent methodology is something called supervised learning. And supervised learning, the way those algorithms work is that you have a lot of data and you have labels for the data, which basically tell the algorithm how it should adjust its weights. Then you have unsupervised learning, which is on the other extreme, where you get lots and lots of data, but you really don’t have many labels and the algorithm has to make sense of it. And then you have semi-supervised, which somewhat sits in the middle, where you may have some labeled data and the rest the machine figures out. And you know, those methods have worked very well for certain kinds of problems, right? Like for example, let’s say language. If you want to train our speech stack to basically pick up a new language and train for it, it’s pretty easy. You get a thousand hours of labeled data and, you know, these algorithms are sophisticated enough that they can pretty much learn up to a certain level of performance. What happens for problems that exist in the real world is that either it’s very difficult to capture the data, or it is nearly impossible to capture all the data that you would ever need to operate inside that environment. 

Host: Right. 

Gurdeep Pall: So reinforcement learning is another approach to AI, which basically says, I’m going to learn by actually taking action in the real world and, every time I take an action, get smarter. And in some ways, you know, this model is how humans learn as well. So, at a high level, that’s what reinforcement learning is. And the way it is actually implemented, practically, is that, you know, you have the state, which is the state that you’re acting against, the algorithm will take an action, and the state changes, and you get a reward back that tells you whether you’re getting closer to or farther from your objective. And then at that point you take the next action. You get a reward back. And you stay in this loop… eventually you figure it out. 

Host: Okay. 

Gurdeep Pall: So that’s reinforcement learning for you. 
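
To make the loop Gurdeep describes concrete, here is a minimal sketch of that state-action-reward cycle in Python. The toy environment, the two-action space and the tabular Q-learning update are illustrative placeholders chosen for brevity; they are not part of AirSim or of any Microsoft toolchain.

```python
# Minimal sketch of the reinforcement learning loop described above:
# observe a state, take an action, receive a reward, repeat.
# The environment and learning rule are illustrative placeholders
# (a tiny tabular Q-learning agent on a made-up 1-D task).
import random
from collections import defaultdict

ACTIONS = ["left", "right"]
GOAL = 5  # hypothetical target position on a 1-D line

def step(state, action):
    """Toy environment: move along a line, reward approaching the goal."""
    next_state = state + (1 if action == "right" else -1)
    reward = 1.0 if next_state == GOAL else -0.1
    done = next_state == GOAL
    return next_state, reward, done

q = defaultdict(float)            # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.95, 0.2

for episode in range(500):
    state = 0
    for _ in range(200):          # cap episode length
        # Explore occasionally, otherwise act greedily on current estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break
```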

Host: So, Ashish, when we talked, we’ve talked many times, you were mentioning how reinforcement learning has been really successful in game scenarios, and when we get into the open world it’s a whole different ball of wax. You gave us an overview, a little over a year ago, on AirSim and why simulation is important in this particular milieu.  

Ashish Kapoor: Yeah, yeah. 

Host: Review for us, for those of our listeners who have not heard that podcast, tell us what AirSim is and why the “sim” part is so important when we talk about these kinds of systems. 

Ashish Kapoor: So AirSim aspires to be a near-realistic simulation for AI and robotics systems. So, in a nutshell, it’s a video game on steroids that software agents are playing. And the game consists of a robotic agent that’s operating in an environment which is akin to reality. So Gurdeep just mentioned technologies such as reinforcement learning and imitation learning, which are trying to solve this decision-making problem. So, as an autonomous agent, you need to make decisions, and decisions are not just about now; they also need to factor in future consequences as well. And that’s what makes them hard. So, for instance, when you’re playing games, things like Go or Atari games or Pac-Man or any other video game, right? As a gamer, you have a sense of how your actions right now have consequences in the future. And, in order to bake that effect into your machine learning, you need to play millions and millions of times. So for instance, you know, Atari games. You know, we’ve been hearing about how machine learning can solve some of these video games at a superhuman level, almost getting a perfect score. For instance, some of the recent work at Microsoft Research in Montreal talks about Pac-Man. But one thing you need to realize is that you are trying to play these games several hundred million times before something reasonable starts to appear as a policy. We do not have that luxury in the real world. I cannot have a robot bonk into things a million times before I start to learn something new. So consequently, simulation. And more importantly, these simulations, with cloud compute specifically, you can have, you know, millions of instances on Azure where these robotic systems, which, at the back end, have a deep neural network or some kind of machine learning agent guiding them, gather these experiences. And we can do that very quickly. We don’t have to wait days and days, and we don’t have to ruin hundreds of millions of robots. You can do everything in simulation, get this data, and given that AirSim is trying to be near realistic, the data that you gather is fairly close to reality. So, you can hope that the policies and the machine learning intelligence that you’re generating will transfer to the real world as well. 
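
For readers curious what the "sim" part looks like in practice, this is roughly how a simulated drone is driven from Python with AirSim's open-source client API. The sketch follows the project's published examples; exact call names and defaults can vary between AirSim releases, so treat it as indicative rather than canonical, and it assumes a running AirSim instance.

```python
# Rough sketch of controlling an AirSim multirotor from Python.
# Based on the open-source project's published client API; call names
# and defaults may vary across AirSim releases. Requires a running
# AirSim simulator to connect to.
import airsim

client = airsim.MultirotorClient()   # connects to the simulator over RPC
client.confirmConnection()
client.enableApiControl(True)        # hand control from RC to the API
client.armDisarm(True)

client.takeoffAsync().join()         # blocking take-off
# Fly to a point 10 m ahead and 5 m up (NED frame: negative z is up) at 3 m/s.
client.moveToPositionAsync(10, 0, -5, 3).join()

# Grab a camera frame the way a learned perception model would see it.
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
])
print("got image:", responses[0].width, "x", responses[0].height)

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```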

Host: Okay. So when we talked more than a year ago, this was sort of kind of news, right? I would like to know where it is now? I mean what has happened in this last year? Has it found traction? 

Ashish Kapoor: AirSim was released as an open source project two and a half years ago and since then we’ve seen increased usage. There are several hundred folks using it, for sure, and then there are some very close academic collaborators that have used technologies based on AirSim to solve very difficult problems. For instance, collaborators at USC have been using AirSim to train computer vision models to recognize poachers from drones that are flying over the savannah. So, you know, you can imagine how difficult that problem is: you have some drone flying at night over the savannah and your goal is to detect all of these warm bodies, be it animals or poachers, which are a few pixels big. Again, where do you get real world data for that? 

Host: No. 

Ashish Kapoor: So consequently, AirSim to the rescue, where we instantiate the entire savannah in simulation and start collecting data at scale. So that’s one example. And similarly, I mean, folks have trained racing cars. So for instance, our collaborators at Technion, they have built an SAE racing car – so SAE is the Society of Automotive Engineers – and they have a competition on F1 racing cars, but autonomous. And so they train their perception models in AirSim to do the controls. So people are solving very hard challenges in autonomy using this technology. 

Host: And very different ones. I mean, from poachers to F1 racing cars. My mouth is open right now. It’s like, mind-boggling. And interesting. 

Gurdeep Pall: I would add to that. I mean it’s… got… now at 9,000 stars on GitHub? 

Ashish Kapoor: Yes, yes. Yeah, yeah. 

Gurdeep Pall: …so it’s at 9,000 stars… AirSim. 

Host: Mouth is still open… Well, I want to talk about Game of Drones. And this is a competition that’s coming up for NeurIPS 2019. Ashish, before I talked to you this week, I was actually unaware that drone racing was a thing, let alone a hugely popular thing. And then I went down the YouTube rabbit hole and saw what it was, and it’s amazing. And we talked about simulated drone racing. They actually have live drone racing… 

Ashish Kapoor: Live drone racing, yeah…! 

Host: …in empty arenas. These guys are in cages and then the drones go like Harry Potter in Quidditch, kind of thing. I was blown away. So, talk about what you’re doing in this arena, no pun intended, of drone racing, and how Game of Drones is playing into that and what you hope to accomplish from a research and science perspective. 

Ashish Kapoor: So Game of Drones is a competition that is being held at NeurIPS 2019, and it was jointly proposed by Microsoft, Stanford University and the University of Zurich. So, yeah, you know, this is in collaboration with academia and it’s trying to solve, I would say, one of the hardest problems in autonomy. I would even say that this is one of the goals to aspire to, ultimately. Like, you know, we talk about all these superhuman feats that AI algorithms can achieve, but they come nowhere close to solving this. So what’s really happening in a drone race, in the real world, when people are playing with it, is that a person is sitting in a chair and he’s wearing goggles. And through these goggles, he can see a video stream from a camera mounted on a drone that’s flying at an incredibly fast rate. 

Host: Yeah. 

Ashish Kapoor: And by looking at that video feed, a human brain is able to essentially send four numbers, you know, those are the control commands, which are the thrust, yaw, roll and pitch, right? The whole image gets translated to four numbers, and you can go to YouTube and see all those videos, the fantastic things they can do… 

Host: I did! 

Ashish Kapoor: They can do flips. They can do quick turns. I mean, it’s amazing, the power of the human brain. So the big question is, can you actually have this thing driven by software? So all your software is seeing is that camera feed. And can you translate all that information into useful control signals? And, more importantly, let’s make it even more challenging. It’s not just about you going through that course. What if you’re also competing against another drone? So now you start thinking about strategies as well. So it’s not just about obstacle avoidance, it’s also about how can I beat my opponent? So it’s bringing in all of these difficult problems which lie at the intersection of perception, controls and planning. And the hope is that, by fielding such a competition, we would start to understand what it would take to build technologies that can solve this hard problem. Now mind you, trying to do this in the real world is extremely hard because, as I said, our algorithms are nowhere near solving even 1% of this challenge. So consequently, we field this in simulation. So there are courses that are built in AirSim where competitors can try their algorithms and see how well they fare. There is an ongoing leaderboard where, you know, you can track your progress against your opponents, and then there are prizes as well. 
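
To make "one camera image in, four numbers out" concrete, here is a skeletal policy network of the kind a Game of Drones entry might learn. The architecture and the mapping of outputs to roll, pitch, yaw rate and throttle are illustrative assumptions, not the competition's reference solution.

```python
# Skeleton of an "image in, four control numbers out" policy as discussed
# above. The architecture and output mapping (roll, pitch, yaw rate,
# throttle) are illustrative assumptions, not the competition's baseline.
import torch
import torch.nn as nn

class ImageToControls(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Four outputs: roll, pitch, yaw rate, throttle.
        self.head = nn.Linear(64, 4)

    def forward(self, image):
        x = self.features(image).flatten(1)
        out = self.head(x)
        # Squash attitude commands to [-1, 1] and throttle to [0, 1].
        attitude = torch.tanh(out[:, :3])
        throttle = torch.sigmoid(out[:, 3:])
        return torch.cat([attitude, throttle], dim=1)

# One forward pass on a dummy 84x84 RGB frame stands in for the video feed.
policy = ImageToControls()
frame = torch.rand(1, 3, 84, 84)
controls = policy(frame)   # tensor of shape (1, 4)
```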

Host: Gurdeep, what do you say about this thing? 

Gurdeep Pall: You know, what is amazing to me is how relevant this is in the world. I mean, you don’t even have to look out hundreds of years. You can see how relevant this is going to be in the next ten years. Today, people are making tall claims about autonomous driving and oh, it’s going to be this year, it’s going to be next year, somebody says fifty years, so there’s all this going on. And actually, what is underlying all that is that there is this missing piece that nobody has cracked. The missing piece is that most of these autonomous driving systems are actually built with old school robotics. And the problem with old school robotics is that it is not sufficient to solve these complex problems. And that is the missing piece. And this Game of Drones goes to the heart of that. 

Host: Gurdeep, I want you to go on a little bit there. There are some other big issues on the horizon related to exactly what you were just talking about when autonomous systems head to real world applications and they involve upstream work with like legal issues, business issues, regulatory issues. So even as Ashish’s team over here is setting up Game of Drones and doing their own research, what’s going on in this other lane? 

Gurdeep Pall: You’re absolutely right. You know, this is sort of a brave new world with a whole bunch of new things that it opens up. If you’re, let’s say, the city of Bellevue, right? Here you were, just about the time you’ve figured out how to get traffic right, coordinating the lights changing as you drive on a main arterial, and now, suddenly, you have delivery robots, you have drones, you have self-driving trucks, you have cars, you have all these things going around, and now, you know, things are messed up all over again. You just start thinking about that problem. How are they going to deal with this really complex world? So this is one of the areas that we are starting to look at: what kind of solutions will they need in a world like this? 

Host: Is there, from the business perspective… because I would assume the researchers are just working on the technology… 

Ashish Kapoor: No… I think it’s important to actually, from a research point of view, take a stand on this. So, you know, we think deeply about this in research as well. 

Host: Okay. 

Ashish Kapoor: And just to give you a nugget: in our research group, the perspective that we have is that if a system failure happens, you should be able to find someone, some human, who is responsible for it. So for instance, if there is an autonomous system that you have designed, a human tells the system, these are the safety specifications; I always want you to stay within them. So, if the system fails because of something outside of those specifications, it’s a human’s fault, because as the designer, he didn’t think about that. So there is a philosophy, even in research, that we need to adhere to. In our case, we take this very seriously when we say that every mistake needs to be traced to a responsible human who can then take corrective action on it. 
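
One way to picture that philosophy is a runtime monitor that sits between a learned policy and the actuators and rejects any command that would leave the human-specified envelope. The limits and the fallback behaviour below are invented for illustration; this is a sketch of the idea, not a Microsoft product feature.

```python
# Sketch of a runtime safety monitor in the spirit described above: a human
# writes down the envelope the system must stay inside, and any command that
# would leave it is replaced by a conservative fallback and logged so the
# failure can be traced back to a responsible person. Limits are made up.
from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    max_speed_mps: float = 5.0      # human-specified limits
    max_altitude_m: float = 30.0
    min_altitude_m: float = 2.0

@dataclass
class Command:
    speed_mps: float
    altitude_m: float

def enforce(cmd: Command, env: SafetyEnvelope) -> Command:
    """Return the command if it stays inside the envelope, else a safe hover."""
    inside = (
        0.0 <= cmd.speed_mps <= env.max_speed_mps
        and env.min_altitude_m <= cmd.altitude_m <= env.max_altitude_m
    )
    if inside:
        return cmd
    # Outside the designer's specification: fall back and leave an auditable record.
    print(f"rejected {cmd}; outside envelope {env}")
    safe_altitude = max(env.min_altitude_m, min(cmd.altitude_m, env.max_altitude_m))
    return Command(speed_mps=0.0, altitude_m=safe_altitude)

# A learned policy proposes a command; the monitor has the last word.
safe_cmd = enforce(Command(speed_mps=9.0, altitude_m=40.0), SafetyEnvelope())
```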

Host: I want to talk a little bit more, Ashish, about something you mentioned on airborne collision avoidance systems and the whole concept of air traffic control, if we’re talking about flying robots. Even now, that particular job is really high stress and very rule-based for humans, but you’ve painted a picture where we’re going to have hundreds of autonomous vehicles. How do you even begin? 

Ashish Kapoor: So, I mean, you’re absolutely right. The job of ATC is one of the most difficult ones, and it’s considered the most stressful job in the world because so much is at stake. And these guys do an amazing job trying to keep us safe while the demand on the system is increasing. And, as you can imagine, as you see more and more drones and flying taxis, this system will not scale. So we will have to start thinking about automated decision-making. So I’m, again, not using the phrase autonomous, but rather automated decision-making, because… 

Host: Interesting. 

Ashish Kapoor: …because, you know, there should be some human accountability, as I said, but a lot of tasks should be automated and almost on the verge of autonomous systems. 

Host: Okay. 

Ashish Kapoor: So that’s where airborne collision avoidance systems comes into effect. If you have these vehicles which are broadcasting their state –  so not just flying vehicles, it can be cars as well, it can be forklifts, it can be delivery robots – but they are broadcasting their state and their intent on what they want to do… 

Host: Okay. 

Ashish Kapoor: …then can you come up with decision-making modules that operate on each robot so that they can take care of themselves? An example that I really like to show is, if you’ve been to Japan, you have the Shibuya crossing, where the lights turn red in every direction at once, and I think five or seven different crossings merge together, and you have three hundred, four hundred people crossing at once. And within, you know, three minutes everyone has crossed. There are no collisions. What would it take us to build something like that? But again, this is right at the intersection of machine learning, autonomy and robotics. 
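
A toy version of that idea, agents broadcasting position and intent and deconflicting locally, might look like the following sketch. The priority rule (lower ID proceeds) and the conflict radius are arbitrary assumptions chosen to keep the example short.

```python
# Toy deconfliction in the spirit of the broadcast-state-and-intent idea:
# every agent announces where it is and where it intends to be next, and a
# local rule decides who yields when two intents collide. The priority rule
# (lower ID goes first) and the conflict radius are arbitrary assumptions.
import math
from dataclasses import dataclass

CONFLICT_RADIUS_M = 2.0

@dataclass
class Broadcast:
    agent_id: int
    position: tuple      # current (x, y) in metres
    intent: tuple        # intended (x, y) one step ahead

def conflicts(a: Broadcast, b: Broadcast) -> bool:
    """Two intents conflict if they end up closer than the safety radius."""
    return math.dist(a.intent, b.intent) < CONFLICT_RADIUS_M

def plan_step(me: Broadcast, others: list) -> tuple:
    """Move toward my intent unless a higher-priority agent claims that space."""
    for other in others:
        if conflicts(me, other) and other.agent_id < me.agent_id:
            return me.position        # yield: hold position this step
    return me.intent                  # clear to proceed

# Two delivery robots heading for nearly the same spot: the lower ID wins.
a = Broadcast(agent_id=1, position=(0, 0), intent=(1, 1))
b = Broadcast(agent_id=2, position=(2, 2), intent=(1.2, 1.2))
print(plan_step(a, [b]))   # (1, 1): proceeds
print(plan_step(b, [a]))   # (2, 2): yields
```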

Host: Everything you come up with brings up new things. It’s like, this is research forever, right? Momma needs a new pair of shoes! 

Gurdeep Pall: Totally. 

Host: Well, there’s another competition that I’d like to talk about. Ashish, you and a team of collaborators just employed your AirSim technology to win a pretty amazing competition called the DARPA SubT Challenge, SubT standing for subterranean. And you won it rather decisively. So tell us about this challenge and why it was an important proving ground for AirSim. 

Ashish Kapoor: Yeah, so Team Explorer, which was spearheaded by Carnegie Mellon University and Oregon State University, so they are our close collaborators, and they had been using AirSim for many other research projects, and when they decided to participate in the DARPA Subterranean Challenge, we had conversations on how simulation could help them achieve the mission. So just a quick introduction to what the DARPA Subterranean Challenge is: terrains under the ground, be it manmade caves, mines, natural caves, tunnels, etc., are very difficult to navigate and can be very dangerous. So think about search and rescue operations. Instead of sending humans, it’s very useful if you can have robots that can navigate, that can map, that can go over obstacles and find things. So the SubT Challenge is centered around that. And our collaborators at CMU and OSU have a pretty amazing robotics system that’s designed to address those challenges. However, how do they test these systems? How do they get training data so that these systems can learn to see objects in these worlds? So consequently, we created a simulation environment that’s a several-miles-long set of caves, with different kinds of effects such as fog and wetness, and different objects hidden at different places, so that they can go and test their algorithms and systems before Team Explorer needs to solve these in the real world, right? 

Host: Hmm. 

Ashish Kapoor: So the simulation environment helps them by validating their methodologies, as well as collecting training data for their perception models. So the competition itself was to identify several of these objects that were lying in a real cave. And you had to send your robots and you had to identify. So the objects, for instance, were a survivor, backpack, a cell phone, a drill and, I believe, a flashlight. 

Host: So this was a real cave? 

Ashish Kapoor: This was a real cave. 

Host: And real robots? 

Ashish Kapoor: And real robots, and there were all these objects. And the team that could go and detect the maximum number of these objects won. 

Host: And so the simulation, prior to the real run, gave you an advantage for executing it in real life? 

Gurdeep Pall: Yeah. And all the algorithms that were there, whether it be, you know, finding it and planning and all that, would be tested in this simulation environment. 

Host: Right. 

Gurdeep Pall: And to me, I mean, the biggest thing about this, other than achievement of, you know, winning the DARPA Challenge is always a… 

Host: Cool thing. 

Gurdeep Pall: …you know, it’s a pretty swaggy thing. But, to me, what is really interesting about it is this thesis that we have that you can create these brains in a simulation environment and you can perform much, much better than anything else in the real world. I mean, this got completely established. In fact, you know, I hope that the industry sees this as a tipping point in moving from some of the old ways of doing things towards kind of what we are pushing. 

(music plays) 

Host: Well, it’s time for “what could possibly go wrong?” on the Microsoft Research podcast, and, when we’re talking about flying robots, the answer is, a lot! So the general risks are pretty obvious, but I know each of you have thought through some specific challenges that keep you up at night. Talk, in turn, about issues of safety and trustworthiness of autonomous systems, and how you’re addressing those so that I don’t lose sleep at night. Gurdeep? 

Gurdeep Pall: Great. Let me first frame it in the context of this industry and Microsoft and what we’ve learned. You know, the good thing is that there’s just a brief history of computing. You know, you don’t have to go back, you know, 10,000 years. I’ve worked on Windows NT and, you know, we thought we were being very careful about making sure the code couldn’t be hacked and this and that. We had no idea. The moment the internet got connected and these machines got on the internet, you basically could be attacked from anywhere and there were such strong incentives for people, for bad actors, to really do their best work. We can’t afford to see that happen in the autonomous systems space. 

Host: Mmm. 

Gurdeep Pall: We have to proactively get in and say, we need to solve these problems before these systems are unleashed in the real world because the consequence of things failing in the real world is very, very high. I mean, this is where it gets completely real. So, that is the broader context, I would say. And by the way, I should plug in, I mean, I think the work that Eric is driving with AETHER, I’m a huge fan of that because I think, as a company, we’ve said we’re going to proactively look at AI all up and, you know, identify some of these things. And I think, in our case in particular, we’re looking at, even from a product perspective, or the tool chain that we are building in my team, how this is one of the differentiated pillars. We call it “trustworthy autonomy.” And we made it one of the three pillars so that, over time, we get more and more sophisticated and whoever is using our tool chain to create autonomous systems can actually benefit from all that. 

Host: Ashish, what would you add? 

Ashish Kapoor: Yeah, so… 

Host: What keeps you up at night? 

Ashish Kapoor: I mean, all those things that Gurdeep mentioned, but I’ll mention one specific thing which keeps me up at night, because it’s something that, you know, as a researcher or a technologist, is hard for me to influence. We are making progress. We are thinking about it. And the community at large is getting aware of those things, so there is at least some progress. But here is the deal: I think the pace at which the technology is evolving is very rapid. 

Host: Mmm. 

Ashish Kapoor: And I’m afraid that, you know, the regulation might not be evolving that quickly. But the reality is the following: right now, if you want to build an autonomous system, there are plans out there on the web. Anyone who has some kind of an engineering expertise can just go ahead and build it very rapidly, right? And of course, you know, a lot of good actors who are trying to make things better in autonomous systems, they’ll play by regulation, right? However, many of those bad actors will not. So I think that’s a tension that I don’t know how to resolve. 

Host: So as you two have identified some things that I think nobody would argue, these are the highest level “what keeps me up at nights” on the planet… Aside from thinking about it, are there things that you’re kind of building or baking into your research and your execution that would say, hey, we’ve got this? 

Gurdeep Pall: Absolutely. In fact, I think Ashish should talk about some of the research work that he’s been doing and we are actively looking to see how we can productize that… 

Host: Okay. 

Gurdeep Pall: …as part of our trustworthy autonomy pillar and feature set. So, yes, we are actively looking at that. 

Ashish Kapoor: You want to talk about that…  

Gurdeep Pall: Yeah, yeah. 

Ashish Kapoor: So safety is very important. Not just from the point of view of, you know, a hacker attacking a system, or cyber-security flaws and things like those, but also from the fundamental technology point of view. 

Host: Hmm. 

Ashish Kapoor: So for instance, your robots, or your autonomous systems, they will have sensors and they will see the world through, very likely, a machine learning system. And we know machine learning systems have problems. Like bias is one, you know, they make mistakes. And when we know that these systems have problems, how do you then assure safety? So that’s a big question that we are trying to answer. And the way we are thinking about this is now building policies that are optimal in the worst case. 

Host: Mmm. 

Ashish Kapoor: So, we need to look back at history and think about how airplanes and spaceships were designed. You know, they solved a very difficult problem and now we can hop onto an airplane without worrying about our safety. What would it take for us to do that? And we need to learn from those fields and we need to look at the technologies they employed. So that’s, you know, in a nutshell, that’s what we’re looking into. 
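
The "optimal in the worst case" framing Ashish mentions can be illustrated with a tiny minimax choice over candidate policies: instead of picking the plan with the best average outcome, pick the one whose worst plausible outcome is least bad. The scenarios and payoff numbers below are invented purely for illustration.

```python
# Tiny illustration of choosing a policy that is optimal in the worst case:
# rather than maximizing average reward, maximize the minimum reward over a
# set of plausible scenarios (e.g. clear weather, fog, a sensor fault).
# The payoff numbers are invented for illustration.

# candidate policy -> {scenario: reward}
payoffs = {
    "aggressive": {"clear": 10.0, "fog": 4.0, "sensor_fault": -6.0},
    "cautious":   {"clear":  6.0, "fog": 2.0, "sensor_fault": -1.0},
    "hover":      {"clear":  0.0, "fog": 0.0, "sensor_fault": -2.0},
}

def worst_case(policy: str) -> float:
    """Smallest reward this policy can receive across the scenario set."""
    return min(payoffs[policy].values())

# Average-best would pick "aggressive"; the worst-case criterion picks "cautious".
robust_choice = max(payoffs, key=worst_case)
print(robust_choice, worst_case(robust_choice))
```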

Host: At the end of every podcast I give my guests the opportunity to say anything they want to our listeners. So Gurdeep and Ashish, do you have any parting advice, warning, wisdom or predictions? What challenges lie ahead for autonomous systems in your minds? 

Ashish Kapoor: If you are an aspiring roboticist, this is the time to come out. The reason being that, you know, the technology is there. There are a lot of people excited about it and definitely the tool chains are becoming much easier. I have a son who wants to do robotics and it’s quite hard to get started on it, but I’m hoping, you know, by the time he goes to middle school, he should be able to build these autonomous systems. He could imagine and he could build these things very easily. So you know, if you’re excited about robotics and machine learning, I think there is stuff out there and I would encourage everyone to come out and, you know, play and contribute. 

Host: Hmm. Gurdeep? Last words? 

Gurdeep Pall: You know, humans overestimate what can happen in three years and underestimate what happens in ten years. And we are embarking on another one of those moments where I think ten years from now, the world is going to be very, very, very different. And it’s inevitable. So it’s not like, you know, let’s slow down or whatever. I think we just have to lean in and go really fast, and pre-think some of these things so that, as the world really adopts these things, it is done in a way that we can all survive it. You know, I have to remind people that it was only a couple of years ago that the iPhone turned ten years old. And when the iPhone came out, nobody could have predicted how it would change lives. And I think you could say that about the internet, you can say that about the personal computer and so many things. I think this is one of those moments. So we have to get into it, both getting ahead of it, leveraging it, you know, getting our kids to be trained in these things. And I think we just have to get in as a society. 

Host: Gurdeep Pall, Ashish Kapoor, thank you for making Episode 100 this fantastic. 

Gurdeep Pall: Thanks for having us. 

Ashish Kapoor: Thank you very much. 

To learn more about how Microsoft is working to bring emergent AI into real world applications, visit Microsoft.com/research
