Episode 85, August 14, 2019
In an era of unprecedented advances in AI and machine learning, current gen systems and networks are being challenged by an unprecedented level of complexity and cost. Fortunately, Dr. Ganesh Ananthanarayanan, a researcher in the Mobility and Networking group at MSR, is up for a challenge. And, it seems, the more computationally intractable the better! A prolific researcher who’s interested in all aspects of systems and networking, he’s on a particular quest to extract value from live video feeds and develop “killer apps” that will have a practical impact on the world.
Today, Dr. Ananthanarayanan tells us all about Video Analytics for Vision Zero (an award-winning “killer app” that aims to reduce traffic-related fatalities to zero), gives us a wide-angle view of his work in geo-distributed data analytics and client-cloud networking, and explains how the duration and difficulty of a Test Cricket match provide an invaluable lesson for success in life and research.
Transcript
Ganesh Ananthanarayanan: When we started, we looked at the winning entry in the tracking challenge which was essentially a challenge where the greatest computer vision people compete to produce the best object tracker. And what we saw was that the winning tracker runs at one frame a second, on an eight-core machine in parallel. And, to just give a perspective, a camera’s frame rates are thirty frames a second. So, it was kind of clear to us at that point that, if video analytics was to take off, systems and networking folks needed to put their heads into the game and then jointly work with the vision researchers to get a working solution.
Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.
Host: In an era of unprecedented advances in AI and machine learning, current gen systems and networks are being challenged by an unprecedented level of complexity and cost. Fortunately, Dr. Ganesh Ananthanarayanan, a researcher in the Mobility and Networking group at MSR, is up for a challenge. And, it seems, the more computationally intractable, the better! A prolific researcher who’s interested in all aspects of systems and networking, he’s on a particular quest to extract value from live video feeds and develop “killer apps” that will have a practical impact on the world.
Today, Dr. Ananthanarayanan tells us all about Video Analytics for Vision Zero (an award-winning “killer app” that aims to reduce traffic-related fatalities to zero), gives us a wide-angle view of his work in geo-distributed data analytics and client-cloud networking, and explains how the duration and difficulty of a Test Cricket match provide an invaluable lesson for success in life and research. That and much more on this episode of the Microsoft Research Podcast.
Host: Ganesh Ananthanarayanan, welcome to the podcast!
Ganesh Ananthanarayanan: Thank you. Thanks for having me here.
Host: I’ve heard some people say your last name could be your secure password?
Ganesh Ananthanarayanan: Right. I spelled it wrong once, so. The first year in the US when I had to file my tax return, I added an extra “an” in the end. And so, as a result, the IRS never sent me my refund.
Host: That’s awful!
Ganesh Ananthanarayanan: So, it wouldn’t be a useful password if I get it wrong.
Host: Well, and you told me something interesting about what your name means…
Ganesh Ananthanarayanan: Yeah, the name actually translates to something like “something that never ends.” It’s unending. And it’s kind of apt, isn’t it, yeah?
Host: Well, I’ve got a crazy last name, too, but as I said, welcome. So, you work in mobility and networking here at Microsoft Research, and your work is broadly situated in the area of systems and networking. So, we’re going to do a deeper dive into your current areas of research shortly, but to kick us off, tell us, in general, what gets you up in the morning? Why do we need systems and networking research and why are you doing it?
Ganesh Ananthanarayanan: That’s a good question. I mean, what gets me up, sort of like professionally, in the morning is frankly, just the job I have. I just love this job description I have where I can, you know, sort of like creatively explore the topic that I’m interested in, keep an eye out for all the trends that are out there and, at the same time, you know, work with this huge sprawling infrastructure that a company like Microsoft has, where we can affect millions of users. So, to me that’s like the best job description and that’s frankly what gets me excited every day. Uh, systems and networking, specifically, I like it because, you know, even at a very philosophical level, I connect to it, because the core aspects of systems and networking are, a) you know, there are tradeoffs all the time. There’s no single perfect solution, which is kind of true if you think about it in life in general. And the second thing is, you know, you cannot hide behind idealized models. Success is what works out in the real world. And that’s kind of true with, you know, life in general, and systems and networking. So, in many ways I connect with it.
Host: Well you know, we are going to go sort of all over here because you have such a broad range of interest, but you have a big interest in live video analytics, which we’ll get to in a second, but I want to stay upstream for a minute to set the stage for that. And one of the things that shows up in talks you give is this phrase, “Cameras are everywhere.” It’s both reassuring and frightening at the same time. So, we’ll get to “frightening” later in the podcast, but for now, I want you to answer two questions. Why are there cameras everywhere? And why is this a good thing?
Ganesh Ananthanarayanan: In many ways, I think the falling camera prices contribute to it. But the main reason I think cameras are seeing this pervasive deployment is the fact that there is this ability for you to monitor things, to record things. And in many ways, I think people put these cameras in, in sort of like the early stages, with the hope that there will be a human who is going to look at it and, perhaps, react to certain things, and so forth. And then we saw this potential with what, you know, computer vision could do: that hey, I could actually analyze these camera streams in real time, perhaps, if needed, and do a bunch of good stuff with it. So that was kind of what we went with in terms of saying, hey, what are the kind of things that we can enable with these cameras that frankly can be very beneficial to us as a whole? I can give you an example of a recent demo that we did at the Microsoft TechFest. We demoed like a smart crosswalk, where if, say, somebody with special needs comes to the edge of the crosswalk, say in a wheelchair, then we would detect that automatically, and then have the “walk” light turned on for them. And not only that, what we also do is that, as the person is crossing, if we notice that, hey, the timer is going to go off, but this person, at the pace at which they’re walking, needs some additional time to finish crossing safely, we give them an additional, say, ten seconds or so, so that they can finish crossing. So… so, I see a lot of potential for a lot of good that can come out of just doing these video analytics, and that can happen only if cameras are everywhere, so.
Host: I’m thinking of all the places that I know there are cameras, like convenience stores, traffic light cameras, traffic cameras in general on the highways, retail scenarios. And then we go back to this idea that somebody should be able to take those images and do something, usually after the fact, right? When did it become so overwhelming that we needed computers in the mix?
Ganesh Ananthanarayanan: So, in both cases, I think even for after the fact, if we have to search through months of videos, there is a rate beyond which you can’t process it if a human is viewing it. Maybe we can view it at 2x speed, but that would still take a lot of time if I am viewing two months of video. And then, of course, there’s an additional point where, hey, why should we stop at only doing things after the fact? Like the crosswalk scenario I just explained, that can happen only if we analyze the feeds in real time.
Host: Right.
Ganesh Ananthanarayanan: So that’s when I think we started getting into this thing where, hey, we needed to actually analyze these feeds in real time: live video analytics.
Host: Talk about the challenges that you faced, once you had the a-ha moment, to be able to technically accomplish that?
Ganesh Ananthanarayanan: So, live video analytics is very challenging from a systems and networking perspective as we see it. You know, among the various domains for, say, systems built for AI, this was the most challenging because, you know, the amount of compute that is needed, the amount of data that is generated, the amount of storage that is needed, the kind of privacy concerns that come out of videos… these are all nothing like any other domain. And if you look at, you know, what I listed in terms of compute, the network, the storage, the security and privacy, these are all the major pillars of systems and networking, and video analytics stresses each one of them. So, from that point of view, we thought it was a pretty, you know, interesting challenge to take on. And then what we also noticed was that, you know, obviously we got excited about doing video analytics because we saw the possibilities of what computer vision could achieve. But when we started looking at the actual solutions, we also noticed that, hey, many of these solutions that are kind of cool required such an enormous amount of resources to run that it’s almost impractical if you were to run something like this 24/7.
Host: Right.
Ganesh Ananthanarayanan: Just to give a concrete example, you know, when we started, we looked at the winning entry in the tracking challenge, which was essentially a challenge where the greatest computer vision people compete to produce the best object tracker. And what we saw was that the winning tracker runs at one frame per second on an eight-core machine in parallel. This is for a single camera. And to just give a perspective, a camera’s frame rates are thirty frames a second. So, it was kind of clear to us at that point, that if video analytics was to take off, systems and networking folks needed to put their heads into the game and jointly work with the vision researchers to get a working solution.
Host: Okay. So, let’s say you’ve linked arms now, what is your quest? What are you trying to accomplish in this field?
Ganesh Ananthanarayanan: So, what we are primarily out to do is to democratize video analytics which is sort of being able to analyze videos live, in real time, at low cost and, at the same time, produce results that are accurate enough for the task at hand.
Host: How’s that going?
Ganesh Ananthanarayanan: It’s going pretty good! We’ve been at it for a couple of years now. We’ve had many systemic breakthroughs in terms of how we could make things substantially cheaper. We’ve engaged with a bunch of customers, in terms of analyzing their needs and so forth, so I think we have like a good sense for what it is that people want with video analytics. So, I would say we’re quite excited in terms of how it’s progressed so far and the road ahead.
Host: All right. Well we’re going to get on that road pretty quick here. It does make me laugh, however, to note what you are aiming for is cheap, fast and good. We are all looking for that holy trinity of cheap, fast and good. Usually they say pick two, right? If you want it cheap and fast, it’s not going to be good. If you want it cheap and good, it’s not going to be fast. And video analytics is like the uber-challenge!
Ganesh Ananthanarayanan: That’s actually true. I quote this point, actually, in my talks too: with this goal across all three axes, I usually say that, if you think about it, achieving any two of the three is much easier than aiming for all three. So, totally agree with you.
Host: So, you’ve given yourself like the hardest task of all…
Ganesh Ananthanarayanan: Yeah, video is the hardest. I mean, we are also looking at it this way: if you look at, say, the whole space of the internet of things, right?
Host: Yeah.
Ganesh Ananthanarayanan: Cameras are the hardest of “things.” So yeah, if we get this right, then we should be able to, you know, navigate through a whole bunch of other verticals as well in this space, so…
Host: Right. That’s exciting.
(music plays)
Host: Well, let’s get specific. And I want to focus this part of our conversation around your work in “healthcare.” Okay, I’m making air quotes around that.
Ganesh Ananthanarayanan: Okay.
Host: Because you are not doing healthcare, but – and I didn’t know this before I talked to you – traffic accidents are among the top ten leading causes of death in the world. And all the others that surround it are actually healthcare issues. So, you’ve got a hero app for that. You call it Video Analytics for Vision Zero. Tell us about it.
Ganesh Ananthanarayanan: Yes, you’re right. It kind of shocked us, as well, to hear that traffic-related fatalities were among the top ten reasons for deaths worldwide. So, when we started out this project on video analytics, we wanted to really have like a driving application where this was really useful. So, at that point, we reached out to the City of Bellevue, where we wanted to check if we could utilize their traffic cameras for any kind of analysis that they would want to do. And it so happened that the City of Bellevue had signed on to this program called Vision Zero. Vision Zero was a larger initiative that started in the late 90s in Sweden where the idea was to get the number of traffic-related fatalities to zero. And so, the City of Bellevue was interested in partnering with us. And we had like a very forward-looking and enthusiastic partner in Franz Loewenherz in the City of Bellevue. And along with Franz, the question that we asked was, how can we use these cameras to improve traffic safety, traffic efficiency and just long-term planning of traffic? So, we worked with the City of Bellevue in terms of analyzing their live feeds to produce, you know, live traffic counts, to raise alerts when the amount of traffic is abnormally high or low. We’ve helped them in terms of a bike planning initiative that they had where we wanted to do like a before/after study on how the number of cars and the number of bicycles changed.
Host: Yeah.
Ganesh Ananthanarayanan: So overall, this has been an initiative that, in many ways, has been sort of like trendsetting. We’ve presented this work at a lot of transportation engineering forums and many cities have expressed interest in terms of adopting a solution like this. So, in many ways, we believe we’ve opened the eyes of folks toward what is possible with video analytics in the space of traffic cameras. So, that’s been pretty good, yeah.
Host: Let’s get a little more technical here. Since we’re talking about cars, let’s look under the hood of Video Analytics for Vision Zero and talk about Rocket, which is the aptly named video analytics engine behind it. So, break down the technical components of Rocket and explain how this approach is bringing down some of those traditional barriers to accurate and affordable video analytics for real people.
Ganesh Ananthanarayanan: Rocket, I think we had, like, a poster title once that said, Rocket: to enable video analytics to take off! So. Yeah. So, the Rocket stack has many components. And right at the top of the stack is this ability to express video analytics as a pipeline of various operators. So, we needed those abstractions where somebody can look at it and go, hey, what is the set of operations that I should string together for me to run this video analytics application? Specifically, an example of that could be: take the video stream, run a decoder, maybe run a filter where some of the frames can be filtered off, then run an object detector and, say, then a classifier. And then, of course, at the end, you have some custom logic, say for counting and so forth. So, this is what sits right at the top of the Rocket stack, this pipeline expression. And then we have the pipeline optimizer, which is kind of important because, in what I explained right now as an example of a generic pipeline, different choices have to be made for different pipelines, depending on what the camera stream looks like.
Host: Right.
Ganesh Ananthanarayanan: What is the kind of filter that I should use that’s best for this specific camera stream? What’s the kind of object detector that suits it best? What’s the frame rate at which I could be running it? Like, for instance, if this is a stream where, say, it’s night and there’s not that much activity happening, you know, I can sample out a lot of frames and save on resources. Whereas, if it’s the daytime, I don’t want to be sampling out frames too much.
Host: Right.
Ganesh Ananthanarayanan: So, this is kind of what the optimizer makes a decision on, which is, what kind of configurations should we be picking for this specific camera? And then, after this, comes the resource manager, where it looks at not just one specific pipeline, but say a bunch of, you know, pipelines together. Presumably a city has hundreds of cameras and all of them are sort of like running together in a cluster, so to speak. So, if you have these resources, how is it that I would distribute these resources across these different pipelines? Clearly, I want to be giving resources to those pipelines that need them the most to get the accuracy that they need. So, this is kind of what the resource manager does. And once this is done, we move to an important portion of the project where video analytics intersects with edge computing, because we are not going to be doing video analytics, especially at scale, if all the videos have to come to a single, central place in the cloud. So, video analytics has to be split across a bunch of smart cameras, some edge clusters in-between, and then the cloud. So, after the resource manager does its thing, we move on to this edge-cloud executor that splits these video analytics pipelines between the various edge clusters and the cloud. And finally, what we do is, hey, we are running live video analytics already. Can we use the results from this to just index the times when, say, a car showed up and an accident happened, or a near accident happened, and store them, so that later on, if you ask me a query on, say, the last month or the last two months, can you find me the times when there were accidents? You don’t have to process the entire set of videos at the time you are asking the query. You should just be able to pull it off an index. So, this way we can actually answer queries on even stored videos almost three orders of magnitude faster than otherwise.
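To make the pipeline-of-operators idea concrete, here is a minimal sketch in Python. It is illustrative only, not Rocket’s actual API: the operator names, the Frame stand-in, and the sampling knob are assumptions, but they show how a decoder-to-counter chain can be strung together and how a per-camera choice like the sampling rate plugs into it.

```python
# Illustrative sketch only (not Rocket's actual API): a video analytics
# pipeline expressed as a chain of operators, plus the kind of sampling
# "knob" a pipeline optimizer might tune per camera.
from typing import Callable, Iterable, Iterator, List

Frame = dict  # stand-in for a decoded frame, e.g. {"t": timestamp, "pixels": ...}
Operator = Callable[[Iterable[Frame]], Iterator[Frame]]

def sampler(keep_every_n: int) -> Operator:
    # Drop frames to save compute; an optimizer might pick a large
    # keep_every_n at night and a small one during the busy daytime.
    def run(frames: Iterable[Frame]) -> Iterator[Frame]:
        for i, f in enumerate(frames):
            if i % keep_every_n == 0:
                yield f
    return run

def detector(label: str) -> Operator:
    # Placeholder detector: a real pipeline would invoke a DNN here and
    # attach bounding boxes to the frame.
    def run(frames: Iterable[Frame]) -> Iterator[Frame]:
        for f in frames:
            yield dict(f, objects=[label])  # hypothetical detection result
    return run

def counter(target: str, counts: List[int]) -> Operator:
    # Custom end-of-pipeline logic, e.g. counting cars seen per frame.
    def run(frames: Iterable[Frame]) -> Iterator[Frame]:
        for f in frames:
            counts[0] += f.get("objects", []).count(target)
            yield f
    return run

def run_pipeline(source: Iterable[Frame], ops: List[Operator]) -> None:
    stream: Iterable[Frame] = source
    for op in ops:
        stream = op(stream)
    for _ in stream:  # pull frames through the whole chain
        pass

if __name__ == "__main__":
    frames = ({"t": t / 30.0} for t in range(300))  # 10 seconds at 30 fps
    car_count = [0]
    run_pipeline(frames, [sampler(keep_every_n=5),
                          detector("car"),
                          counter("car", car_count)])
    print("frames with a 'car' detection:", car_count[0])
```

The point of the abstraction is that an optimizer can then search over knobs like the sampling rate or the detector variant to find the cheapest configuration that still meets the accuracy the application needs.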
Host: All right. So, you’ve partnered with, say, the City of Bellevue and it’s actually – was it a research project or have you implemented this or deployed this any time, any place?
Ganesh Ananthanarayanan: We’ve actually implemented it, deployed it and had it running for a long time. So we went through this entire thing where we built the system; we produced, say, live counts, alerts and a whole bunch of stats for the traffic; we built a dashboard that displayed all these counts live, in real time, including alerts; and then had this dashboard running live in the City of Bellevue’s traffic management center. And so yeah, we wanted to do this entire thing where we went beyond just, you know, saying, hey, this is the part that I’m interested in.
Host: Yeah.
Ganesh Ananthanarayanan: Instead looking at it holistically like, what does it take to build the end-to-end product, if you would, for something like this?
Host: Well, right about now I want to tell our listeners that they ought to go to the website and look under your name. There are talks. There are videos. There are projects and papers all about this. It’s a very visual thing that you are doing and there’s a lot of video on there where people can see this in action. It’s really cool. I looked at it. In fact, when I was looking at the Vision Zero work in Bellevue, I was actually looking for my own car. Had they caught me on camera?
Ganesh Ananthanarayanan: Yeah, like one of the cool things I do sometimes when I drive around with friends is – so one of the cameras we analyze is right next to Lincoln Square…
Host: Yeah.
Ganesh Ananthanarayanan: …in Bellevue. And it’s cool to point them to that camera and say, hey, that’s, you know, one of the cameras that we are analyzing, so…
Host: So, smile!
Ganesh Ananthanarayanan: So, smile. We’re just on camera, yeah…
Host: Selfie! All right. Well let’s talk about some of the other cool work you’re doing. One of the attributes of next-gen data analytics is that workloads are no longer confined to one datacenter but spread out across many datacenters and edge clusters around the world. And this poses a host of challenges for systems and networking researchers. So, tell us about the challenges and how you’re tackling them through your work on geo-distributed data analytics.
Ganesh Ananthanarayanan: So, my background during my PhD centered a lot on big data analytics. We did some of the earliest work in terms of straggler mitigation in big data analytics, which is kind of important to, you know, maintain a whole bunch of service-level objectives for these big data jobs. And after that, we started to think, like, you know, what is the next generation of data analytics going to look like? And we saw that, hey, for these companies like Microsoft, you know, the Azure infrastructure is sprawling; it’s not just a single datacenter, but spread out worldwide. And these are not just datacenters; there are a whole bunch of other edge clusters and so forth. And in many ways, you know, what we figured was that, hey, analyzing all this data that is sort of like geo-distributed now is the next sort of like frontier, if you would, for big data analytics. And so, what we sort of ended up with was two key results, or accomplishments. First was that we needed to do this data analytics without necessarily aggregating all the data to a single place, because if you would aggregate all the data to a single place, the network would be too much of a bottleneck, and that’s not the way we should be doing it. And the second thing we wanted to do was to make sure that queries, and the whole infrastructure that we ran in a single datacenter, should automatically cross over when you run it across a cluster of datacenters as well. So, we had this abstraction of sort of like an internet-spanning cluster, again, of all these different, you know, datacenters and edge clusters worldwide. And so, then we designed a bunch of solutions that essentially dealt with the enormous heterogeneity that you would see in these different clusters, the different network uplinks and downlinks. But the cool thing was that you could write the same query and it will run both in a single datacenter as well as distributed across many datacenters and edge clusters. And in many ways, this work, and the way we were thinking about it, is also what personally led me to this topic of video analytics, because I was thinking more and more about it and it started becoming clear to me that one of the biggest sources of distributed data that’s generated is going to be cameras. So how is it that you are going to analyze all these camera streams without bringing it all to a single datacenter?
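To make the “don’t aggregate all the raw data to one place” idea concrete, here is a minimal sketch under simple assumptions (a counting query and made-up site names, not the actual geo-distributed system): each site computes a small partial aggregate locally, and only those summaries cross the wide-area links to be merged.

```python
# Minimal sketch of geo-distributed partial aggregation (illustrative only):
# each datacenter / edge cluster aggregates its own records locally, and
# only the tiny per-site summaries travel over the wide-area network.
from collections import Counter
from typing import Dict, Iterable, List

def local_aggregate(records: Iterable[str]) -> Counter:
    # Runs inside each site; the raw records never leave the premises.
    return Counter(records)

def merge(partials: List[Counter]) -> Counter:
    # Runs wherever the query was issued; input is tiny compared to raw data.
    total: Counter = Counter()
    for p in partials:
        total.update(p)
    return total

if __name__ == "__main__":
    sites: Dict[str, List[str]] = {        # hypothetical per-site event logs
        "us-west": ["click", "click", "view"],
        "europe":  ["view", "view"],
        "asia":    ["click", "view", "view", "view"],
    }
    partials = [local_aggregate(recs) for recs in sites.values()]
    print(merge(partials))  # same answer as centralizing all the raw records
</code>
```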
Host: All right, moving on. My life is literally dependent on the rather tenuous idea that all my technologies are going to get along with each other all the time. Of course, everything doesn’t always go perfectly. Things go wrong. And that’s why there are people like you to work on the systems and networking problems that we face. And, as we talked about earlier, I often don’t even know whom to call anymore. Is it Amazon, is it Comcast, is it Microsoft, is it AT&T, is it, you know, my internet host? I don’t know. So, talk about your work in wide area networking and how it might help all of us, especially as we depend on our networked technologies in an increasingly mission-critical world.
Ganesh Ananthanarayanan: Sure! So, I focus on this wide area part of the network, more specifically, the part of the network that’s between the client and the cloud. And I especially see that, with the enormous amount of work that’s happened, mostly in the last decade, on the networking inside the datacenter and, say, connecting our own datacenters, the client-cloud part is the weakest link, and that’s kind of what we really need to address and take care of.
Host: Okay.
Ganesh Ananthanarayanan: And it’s kind of interesting that you also brought up this thing about, you know, whom to call. Because one of the projects that we are actually working on right now with Azure Networking is this problem where, when connectivity is bad between our clients and any of our datacenters, who is the culprit? Whom do we blame? We actually call the project, “Blame It.” So, if you think about it, between the client and us, there are a bunch of autonomous systems, or AS’s, in-between. And the internet is designed for these autonomous systems to function autonomously. So, it’s kind of an interesting, challenging problem in terms of how is it that we zone in without necessarily owning all the pieces between us…
Host: Right.
Ganesh Ananthanarayanan: … and the client? So, this is work that we’ve been doing with Azure Networking. We’ve deployed initial parts of it and so yeah, we are excited about where it can progress.
Host: Okay. So, without, sort of, signing an NDA, tell me what you are doing. Technically how are you – I mean, are you employing machine learning and AI techniques to – because this is something no human – not even a bunch of humans that have brains like yours – could do.
Ganesh Ananthanarayanan: So, in fact, it was interesting, since you brought up machine learning: the way we got interested in this was that, a couple of years back, we were working with the Skype team in terms of relaying Skype calls. Say the caller is in Seattle and the callee is somewhere in India, in Bangalore. The call could get into a relay, say, in Bellevue, go through the Microsoft backbone and then exit at some point close to the callee. And so, it was sort of like a problem about which path to choose. In many ways, we mapped this to a multi-armed bandit formulation…
Host: Interesting.
Ganesh Ananthanarayanan: Where we could sort of like explore all the paths that are out there for their different performance characteristics and then exploit the one that is the best, and continuously do this.
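For readers who want the explore/exploit idea spelled out, here is a toy epsilon-greedy sketch. The relay names, latencies, and the specific epsilon-greedy strategy are illustrative assumptions on my part, not the production Skype relay system or the actual bandit formulation used.

```python
# Toy epsilon-greedy bandit for picking a relay path (illustrative only).
import random
from typing import Dict, List

def pick_path(avg_latency: Dict[str, float], counts: Dict[str, int],
              epsilon: float = 0.1) -> str:
    # Explore a random path with probability epsilon; otherwise exploit
    # the path with the best observed average latency so far.
    if random.random() < epsilon or not any(counts.values()):
        return random.choice(list(avg_latency))
    return min(avg_latency, key=avg_latency.get)

def update(path: str, observed_ms: float,
           avg_latency: Dict[str, float], counts: Dict[str, int]) -> None:
    counts[path] += 1
    n = counts[path]
    avg_latency[path] += (observed_ms - avg_latency[path]) / n  # running mean

if __name__ == "__main__":
    paths: List[str] = ["relay-bellevue", "relay-sanjose", "relay-london"]
    true_ms = {"relay-bellevue": 180.0, "relay-sanjose": 210.0,
               "relay-london": 260.0}                 # hypothetical latencies
    avg = {p: 0.0 for p in paths}
    cnt = {p: 0 for p in paths}
    for _ in range(1000):
        p = pick_path(avg, cnt)
        observed = random.gauss(true_ms[p], 15.0)     # simulated call latency
        update(p, observed, avg, cnt)
    print("path chosen most often:", max(cnt, key=cnt.get))
```

With enough calls, the best relay ends up carrying most of the traffic while the other paths are still probed occasionally in case conditions change.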
Host: Well, while we are on that subject of machine learning and systems and networking, let’s talk a little bit about your “split personality.” At least part of you is a networking researcher, but the other part of you is interested in machine learning techniques and so on. So how do you bring your “selves” together and how does this inform the future of networking research?
Ganesh Ananthanarayanan: I primarily see myself as a systems and networking person; that’s kind of what my training has been, and a bunch of my work has been in it as well. So what I do is bring the systems and networking lens to a bunch of other problems, like, specifically these days, with vision and ML: how is it that you can make something more expressible and easy? How is it that you can make something cheaper? How is it that you could make it more reliable? So that helps where we could get, like, this bunch of tradeoffs that I had mentioned earlier in our conversation for video analytics, for instance. And so, that’s the direction I come from. But there is also this other direction which is often kind of useful, too, which is that a whole lot of problems in the systems and networking space could actually benefit if you take a look at them from an AI lens or an ML lens. I mentioned the bandit formulation. And this is sort of like a longstanding problem: how is it that you choose the best path between, you know, two hosts on the internet? I remember we had a panel at the Microsoft Faculty Summit last year where we hosted a bunch of people on this topic. I think, if I recall right, the topic of the panel was, “The Good, Bad and the Ugly of ML in Network Systems.” Yeah. So, we were debating the set of topics where ML is appropriate in a networking context. And I recall one of the points that we had discussed was that it’s good to have it in cases where the problem is computationally intractable, like, say, scheduling or query planning. And I remember we also had an Azure person on the panel, and his point was that we also need to get to a point where the results are explainable, because if something breaks in production, the last thing I want to hear is, hey, I don’t know why this happened. So yeah, but you know, a bunch of people have already shown the potential of applying ML to systems. And I think the journey ahead, and the possibilities of what we can apply and do, are quite exciting.
(music plays)
Host: All right. Well we’ve reached the point in the podcast where I ask, “what could possibly go wrong?” And I think we have to go back to the phrase, “Cameras are everywhere.” We’ve talked about the upside and how you’re working on applications that help people, and I think we can all see the benefits there. But as we know, with every powerful technology, there’s always a downside. And some people would argue that you are creating – or if not creating, equipping – Big Brother. So, is there anything about your work that keeps you up at night, Ganesh?
Ganesh Ananthanarayanan: This definitely keeps me up at night, in that, hey, by making video analytics cheaper, are we enabling more and more people to be surveilled? So, I look at this from the paradigm of mechanism and policy, where I see what I’m doing as mechanisms for doing something. And then there is a whole set of policies that sort of like explain and stipulate what can be done and what cannot be done. And so, it’s heartening to me that this debate around policy has, in many ways, already begun. People are already taking it quite seriously. You know, like, for instance, we tried to work with the Microsoft surveillance system and, while on one hand it was frustrating that we could not get access to those feeds, on the other hand, it made me feel good about the fact that, hey, somebody is taking it very seriously, that even with Microsoft employees, these are not videos that would just be allowed to, you know…
Host: …be data for research.
Ganesh Ananthanarayanan: Yeah. So, I think more work needs to be done on that. But I think we’re thinking about it the right way in terms of saying what is okay to be done and what is not okay to be done.
Host: So, how do you deal with the fact that you’ve got these raw feeds that, I would presume, have all the information on them? They’re not doing little bars across the eyes while the camera is rolling…
Ganesh Ananthanarayanan: And this touches on this point about edge computing. Edge computing gives you this powerful mechanism to do these controls: as long as the video is being processed at your edge, it is not leaving your premises, as such. So that’s a great place where the raw video can come in. You can, for instance, obscure or obfuscate any sort of sensitive information…
Host: Mmm-hmm.
Ganesh Ananthanarayanan: …and then re-encode it and send it out so that whatever goes out already has a bunch of these PIIs stripped out.
Host: PIIs?
Ganesh Ananthanarayanan: Uh, personally identifiable information. Once they are out, then what we have in the cloud is much less sensitive, if you would. And then, of course, on top of it, if you add policies in terms of what you can do and cannot do, then I think, yeah, we have designed something that’s not necessarily going to enable Big Brother, but enable things that we think are constructive and useful to society, yeah.
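Here is a small sketch of the kind of edge-side obfuscation being described: using OpenCV’s stock face detector to blur faces before the video is re-encoded and leaves the premises. The detector choice, the blur, and the file names are illustrative assumptions, not the actual Rocket pipeline.

```python
# Small sketch of edge-side PII stripping before video leaves the premises:
# detect faces with OpenCV's stock Haar cascade, blur them, then re-encode.
# The detector, blur, and file names are illustrative choices.
import cv2

def strip_faces(in_path: str, out_path: str) -> None:
    face_model = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, fw, fh) in face_model.detectMultiScale(gray, 1.1, 5):
            roi = frame[y:y + fh, x:x + fw]
            frame[y:y + fh, x:x + fw] = cv2.GaussianBlur(roi, (51, 51), 0)
        out.write(frame)  # only the obfuscated frames are re-encoded and shipped
    cap.release()
    out.release()

# strip_faces("camera_feed.mp4", "sanitized_feed.mp4")  # would run at the edge
```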
Host: Right.
Ganesh Ananthanarayanan: One of the works that we are doing right now actually is using secure enclaves for video analytics, even in the cloud, so that the only person who would know what’s happening inside, is the person that’s actually running it. Even the cloud operator, or the operating system, would not be able to immediately snoop into what’s happening.
Host: Well it’s story time, Ganesh. And again, you have a rather entertaining one. Tell us a bit about yourself. What got you interested in systems and networking research? Where did you start and how did you end up here at Microsoft Research?
Ganesh Ananthanarayanan: I dabbled in a few things during my undergrad, like anybody would, in terms of figuring out what it is that really interested me. And, uh, you know, it kind of got to this conclusion where, at the end of my undergrad, I thought, hey, I want to do systems and networking. And it was, of course, in a broad sense. And so, in the place I did my undergrad, we had this requirement where, in the last full semester, we had to do an internship. And on the forms that you fill in, I had filled in MS R&D. It was just called Microsoft R&D in Bangalore. And I was kind of excited because I thought, hey, OK, so this is a lab that I’ve heard of, you know. Some of my friends had interned in Microsoft Research during their undergrad. And I thought, okay, so this sounds good, and, you know, it should give me a chance to try out things. But uh, that didn’t exactly go as expected. After I landed there, I realized that this was actually a global technical support center. This was, uh… while it’s a hugely important job…
Host: Absolutely!
Ganesh Ananthanarayanan: …it was, as I said, not expected, in terms of what I wanted to do.
Host: Right.
Ganesh Ananthanarayanan: But at the same time, I also figured out that, you know, Dr. Anandan, who was then the head of the MSR India lab, he had just opened the lab, and I had seen his press conferences and so forth. And so, what I did was, I found where his office was, and I went over to his office and, the way he puts it, uh, you know, I hired myself! In that I essentially told him that, hey, I would like to try my hand at research, you know, can you give me a chance at it? And that turned out to be pretty good. I remember at that point he said, okay, come on in, and we started working with him and his team. That was kind of cool. And from that point on, I, you know, personally also decided, why don’t I spend, you know, a little more time at this place after my internship ended? And so, yeah, that got me interested in systems and networking, then I went to Berkeley for my PhD, and then came here to Redmond.
Host: Tell us one thing about yourself, something interesting, a characteristic, a life event, a trait, a personal quirk, an interest of yours, that we might not know about you and how it has influenced your career as a researcher.
Ganesh Ananthanarayanan: I can relate it to the game of cricket. There is a form of cricket called Test Cricket and, believe it or not, it goes on for five days! So, it’s a really long game, or a sport, if you would, and the game, as such, is split, you know, over five days. Each day has three sessions. And the lesson I took away from cricket was that, if you want to win the entire test match, you don’t evaluate every session in a win/lose manner. At the end of each session, just make sure you’re building up towards winning the match, but don’t take any of the short-term things too, you know, seriously, in either a positive sense or a negative sense, because that doesn’t eventually lead to you winning the match. You win the match if you keep your eye on the ball for the entire five days. So yeah, that helps me, in many ways, balance what I do in the short term, and what happens in the short term, both positive and negative, while always making sure I try to think about the long term, or the test match, if you would, yeah.
Host: So, research as test match.
Ganesh Ananthanarayanan: Research as a cricket test match, yeah!
Host: Well, at the end of every podcast, I give my guests a chance to say anything they want to our listeners, some of whom might just be getting their feet wet in systems and networking research. So, it could be something helpful, inspirational, cautionary, encouraging, profound, whatever. You get the last word. What would you want to say?
Ganesh Ananthanarayanan: To people that are getting their feet wet in systems and networking… I can pass on lessons that I’ve learned from the many illustrious people I have worked with. It’s always important, when you choose a problem, to look at something that is relevant. Ask, what’s the impact? Whose life is it going to make better? And often, you know, I’ve seen that in cases when I have done it, things turn out to be much better than in the cases when I have not done it. So, that’s a very handy question to have. While designing a solution, something I always try to do is keep the solution simple. Do not complicate the solution. There is no great heroism in having a complicated solution. Simple is always better. And seek out collaborators. There’s no better way to amplify your work; good collaborators have a huge multiplying effect. That’s, yeah, that’s what I would tell anyone who is starting out.
Host: Occam’s Razor with a great team!
Ganesh Ananthanarayanan: Yeah, that would be a way, yeah!
Host: Ganesh Ananthanarayanan, thank you for joining us on the podcast today.
Ganesh Ananthanarayanan: Thanks for having me over, it was fun.
(music plays)
Host: To learn more about Dr. Ganesh Ananthanarayanan and the latest in systems, networking and mobility research, visit Microsoft.com/research