
Microsoft Research Lab – India

Podcast: HAMS: Using Smartphones to Make Roads Safer. With Dr. Venkat Padmanabhan and Dr. Akshay Nambi


Episode 013 | June 14, 2022


Road safety is a very serious public health issue across the world. Estimates put the traffic related death toll at approximately 1.35 million fatalities every year, and the World Health Organization ranks road injuries in the top 10 leading causes of death globally. This raises the question- can we do anything to improve road safety? In this podcast, I speak to Venkat Padmanabhan, Deputy Managing Director of Microsoft Research India and Akshay Nambi, Principal Researcher at MSR India. Venkat and Akshay talk about a research project called Harnessing Automobiles for Safety, or HAMS. The project seeks to use low-cost sensing devices to construct a virtual harness for vehicles that can help monitor the state of the driver and how the vehicle is being driven in the context of the road environment it is in. We talk about the motivation behind HAMS, its evolution, its deployment in the real world and the impact it is already having, as well as their future plans.

Venkat Padmanabhan is Deputy Managing Director at Microsoft Research India in Bengaluru. He was previously with Microsoft Research Redmond, USA for nearly 9 years. Venkat’s research interests are broadly in networked and mobile computing systems, and his work over the years has led to highly-cited papers and paper awards, technology transfers within Microsoft, and also industry impact. He has received several awards and recognitions, including the Shanti Swarup Bhatnagar Prize in 2016, four test-of-time paper awards from ACM SIGMOBILE, ACM SIGMM, and ACM SenSys, and several best paper awards. He was also among those recognized with the SIGCOMM Networking Systems Award 2020, for contributions to the ns family of network simulators. Venkat holds a B.Tech. from IIT Delhi (from where he received the Distinguished Alumnus award in 2018) and an M.S. and a Ph.D. from UC Berkeley, all in Computer Science, and has been elected a Fellow of the INAE, the IEEE, and the ACM. He is an adjunct professor at the Indian Institute of Science and was previously an affiliate faculty member at the University of Washington. He can be reached online at http://research.microsoft.com/~padmanab/ (opens in new tab).

Akshay Nambi is a Principal Researcher at Microsoft Research India. His research interests lie at the intersection of Systems and Technology for Emerging Markets broadly in the areas of AI, IoT, and Edge Computing. He is particularly interested in building affordable, reliable, and scalable IoT devices to address various societal challenges. His recent projects are focused on improving data quality in low-cost IoT sensors and enhancing performance of DNNs on resource-constrained edge devices. Previously, he spent two years at Microsoft Research as a post-doctoral scholar and he has completed his PhD from the Delft University of Technology (TUDelft) in the Netherlands.

More information on the HAMS project is here: HAMS: Harnessing AutoMobiles for Safety – Microsoft Research

For more information about Microsoft Research India, click here.


Transcript

Venkat Padmanabhan: There’s hundreds of thousands of deaths and many more injuries happening in the country every year because of road accidents. And of course it’s a global problem and the global problem is even bigger. The state of license testing is such that, by some estimates in public reports, over 50% of licenses are issued without a test or a proper test. So we believe a system like HAMS that improves the integrity of the testing process has huge potential to make a positive difference.

[Music]

Sridhar Vedantham: Welcome to the Microsoft Research India podcast, where we explore cutting-edge research that’s impacting technology and society. I’m your host, Sridhar Vedantham.

[Music]

Sridhar Vedantham: Road safety is a very serious public health issue across the world. Estimates put the traffic related death toll at approximately 1.35 million fatalities every year, and the World Health Organization ranks road injuries in the top 10 leading causes of death globally. This raises the question- can we do anything to improve road safety? In this podcast, I speak to Venkat Padmanabhan, Deputy Managing Director of Microsoft Research India and Akshay Nambi, Principal Researcher at MSR India. Venkat and Akshay talk about a research project called Harnessing Automobiles for Safety, or HAMS. The project seeks to use low-cost sensing devices to construct a virtual harness for vehicles that can help monitor the state of the driver and how the vehicle is being driven in the context of the road environment it is in. We talk about the motivation behind HAMS, its evolution, its deployment in the real world and the impact it is already having, as well as their future plans.

[Music]

Sridhar Vedantham: Venkat and Akshay, welcome to the podcast. I think this is going to be quite an interesting one.

Venkat Padmanabhan: Hello Sridhar, nice to be here.

Akshay Nambi: Yeah, hello Sridhar, nice to be here.

Sridhar Vedantham: And Akshay is of course officially a veteran of the podcast now since it’s your second time.

Akshay Nambi: Yes, but the first time in person so looking forward to it.

Sridhar Vedantham: Yes, in fact I am looking forward to this too. It’s great to do these things in person instead of sitting virtually and not being able to connect physically at all.

Akshay Nambi: Definitely.

Sridhar Vedantham: Cool, so we’re going to be talking about a project that Venkat and you are working on, and this is something called HAMS. To start with, can you tell us what HAMS means or what it stands for, and a very brief introduction into the project itself?

Venkat Padmanabhan: Sure, I can take a crack at it. HAMS stands for Harnessing Automobiles for Safety. In a nutshell, it’s a system that uses a smartphone to monitor a driver and their driving, with a view to improving safety. So we look at things like the state of the driver, where they’re looking, whether they’re distracted, and so on. That’s sort of looking at the driver. But we also look at the driving environment, because we think, to truly attack the problem of safety, you need to have both the internal context inside the vehicle as well as the external context. So that’s the sort of brief description of what HAMS tries to do.

Sridhar Vedantham: Ok. So, you spoke about a couple of things here, right? One is the safety aspect of, you know, driving, both internal and external. When you’re talking about this, can you be more specific? And especially, how did this kind of consideration feed into, say, the motivation or the inspiration behind HAMS?

Akshay Nambi: Yeah, so as you know, road safety is a major concern, not just in India but globally, right? And when you look at the factors affecting road safety, there is the vehicle, there’s the infrastructure and the driver. And the majority of the incidents today trace back to the driver. For instance, the key factors affecting road safety include over-speeding, driving without seatbelts, drowsy driving and drunken driving, all centering around the driver. And that’s what motivated us to look at the driver more carefully, which is why we built the system HAMS, which focuses on monitoring the driver and also how he’s driving.

Sridhar Vedantham: And India in particular has an extremely high rate of deaths per year, right, in terms of in terms of roads accidents.

Akshay Nambi: Yes, it’s at the top of the list. In fact, around 80,000 to 1.5 lakh (150,000) people die every year according to stats from the government. Yeah, it’s an alarming thing and hopefully we are taking baby steps to improve that.

Venkat Padmanabhan: In fact, if I may add to that, if you look at the causes of death, not just road accidents, diseases and so on, road accidents are in the top 10. And if you look at the younger population, you know people under 35 or 40, it’s perhaps in the top two or three. So it is a public health issue as well.

Sridhar Vedantham: And that’s scary. Ok, so how does this project actually work? I mean, the technology and the research that you guys developed and the research that’s gone into it. Talk to us a little bit about that.

Venkat Padmanabhan: Sure yeah, let me actually wind back, maybe 10-15 years, to when we first started on this journey, and then talk more specifically about HAMS and what’s happened more recently. Smartphones, as you know, have been around for maybe 15 years. A bit longer maybe. And when smartphones started emerging in the mid 2000s and late 2000s, we got quite interested in the possibility of using a smartphone as a sensor for, you know, road monitoring, driving monitoring and so on. And we built a system here at Microsoft Research India back in 2007-08, called Nericell, where we used a leading-edge smartphone of that era to do sensing. But it turned out that the hardware then was quite limited in its capabilities in terms of sensors- even an accelerometer was not there, we had to pair an external accelerometer and so on. And so our ability to scale that system and really have interesting things come out of it was quite limited. Fast forward about 10 years: not only did smartphone hardware get much better, the AI and machine learning models that could process this information became much better, and among the new sensors in the newer smartphones are the cameras, the front camera and the back camera. And machine learning models for computer vision have made tremendous progress. So that combination allowed us to do far more interesting things than we were able to back then. Maybe Akshay can talk a bit more about the specific AI models and so on that we built.

Akshay Nambi: Yeah, so if you compare the systems in the past to HAMS, what was missing was the context. In the past, systems like the one Venkat mentioned- Nericell- were collecting the sensor data, but they were lacking context. For example, such a system could tell that the driver braked harshly, but it could not tell whether he did it because somebody jumped in front of the vehicle, or because he was distracted. The cameras new smartphones have can provide this context, which makes these systems much more capable and able to provide valuable insights. And in terms of the specific technology itself, we go with commodity smartphones, which have multiple cameras today: the front camera looking at the driver, the back camera looking at the road. And we have built numerous AI models to track the driver state, which includes driver fatigue and driver gaze, that is, where the driver is actually looking. And also, with the back camera, we look at how the driver is driving with respect to the environment. That is, is he over-speeding, is he driving on the wrong side of the road and so on.

Sridhar Vedantham: So, this is all happening in real time.

Akshay Nambi: The system can support both real-time and offline processing. And as you know, smartphones are intelligent edge devices, but they still have limited processing power. So, we decide which of the capabilities should run in real time on the phone, which can be offloaded to the cloud, and which can be left for offline processing.

Sridhar Vedantham: OK.

Venkat Padmanabhan: I want to make a distinction between our ability to run things in real time- as Akshay said, much of our research was in making the computation inexpensive enough that it can run in real time- and the user interface. We explicitly decided early on in our journey that we did not want a system that intervened in real time and, you know, provided alerts, because the bar for that, if you will, is very high. In the sense that you don’t want to make a mistake: if you alert a driver and that alert is actually a mistake, you might actually cause an accident, right? And since we were shooting for a low-cost system with just a smartphone and so on, it did not seem like a reasonable thing to aim for. What we really aim for is processing that is efficient, that doesn’t overheat the phone- you know, the processing can sometimes just cause a smartphone to melt down. But at the same time, depending on the context, and we’ll get to our driver testing application soon hopefully, we can offload computation to either a more capable edge device nearby or to the cloud, as Akshay said, and we definitely want to leverage that. We’re not bound to just the smartphone for compute.

Sridhar Vedantham: Right, so you know you spoke about the fact that you’re using commodity hardware to do all this right? And it’s always fascinated me that today’s consumer device, basically the commodity hardware that you get, is capable of doing a lot of stuff. But even then, there must have been challenges in taking you know, maybe a 25,000 Rupee or 20,000 Rupee phone just off the market and trying to get it to do things that, uh, sound like they should be running in the cloud and not on the phone, frankly. Did you have any of these challenges?

Akshay Nambi: Oh, numerous of them. To start with, on low-cost smartphones, as you turn on the cameras, as most of you would have noticed, the phone heats up much more quickly than in a normal setup. While the system setup itself is very simple- just a smartphone- building a system on top of a smartphone poses numerous challenges. Starting with real-world conditions: there is different lighting as you drive on roads, there is daytime and nighttime. How does your algorithm adapt to these conditions? There are different types of vehicles- hatchback, SUV. How does your algorithm adapt to these? There are different driver seating positions and different ways of mounting the smartphone in the vehicle. All of these can change, right? So getting your AI models to work in such a dynamic setup is the biggest challenge, and one of the key research thrusts in HAMS is to address these challenges in a practical way. That’s been one of our key focuses. Second, coming to the hardware itself: since we want to do some of this processing on the smartphone, you have to come up with algorithms that are efficient. Today a smartphone’s camera can generate 30 frames per second, but the compute power is not there to process all 30 frames. So you have to come up with intelligent algorithms to decide which frames to process, which frames to discard, and which frames to run a cheaper algorithm on versus a more accurate one. There are a lot of these decisions you have to go through to build this system.

Sridhar Vedantham: Right.

Venkat Padmanabhan: Just to add to what Akshay said, if I step back, there are, I would say, two major pillars of our research. One is being adaptive. For example, Akshay talked about the processing being expensive. Let’s say you’re trying to do vehicle ranging- trying to figure out whether the driver is tailgating, being too close to the vehicle in front. There are very good machine learning models that will do object detection- you know, find the vehicles in front of you in a frame and so on- but they’re expensive. Instead, we could combine that approach with a much less expensive tracking algorithm, so that once you have found an object, you just use the cheaper tracking algorithm for a while, because the vehicle in front of you is not going to disappear. If it’s in front of you now, chances are that for the next several seconds it’ll still be there- so that’s adaptivity. The other aspect is auto-calibration, or calibration in general. As Akshay said, the vehicle geometry, the mounting of the phone, the driver seating position, all of that changes, and it is obviously not practical to manually recalibrate each time a driver mounts a phone and starts. So we need ways of automatically calibrating the system. A lot of our technical work and the research we’ve published falls in these two buckets.
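[Editor’s note: the detect-then-track pattern Venkat describes can be sketched roughly as below. This is an illustrative sketch only- the `detect` and `track` functions are hypothetical stand-ins for a real object detector and tracker, not the HAMS code.]

```python
# Run an expensive detector only every N frames, and a cheap tracker
# on the frames in between. detect() and track() are stand-in stubs.

def detect(frame):
    # Stand-in for an expensive object detector (e.g. a DNN): returns
    # a fake bounding box derived from the frame index.
    return (frame, frame + 10)

def track(prev_box, frame):
    # Stand-in for a cheap tracker: nudges the previous box forward.
    lo, hi = prev_box
    return (lo + 1, hi + 1)

def process_stream(frames, detect_every=10):
    """Alternate detection and tracking to save compute per frame."""
    boxes, box = [], None
    for i, frame in enumerate(frames):
        if box is None or i % detect_every == 0:
            box = detect(frame)      # expensive, runs infrequently
        else:
            box = track(box, frame)  # cheap, runs on the rest
        boxes.append(box)
    return boxes
```

With `detect_every=10`, the expensive model runs on one frame in ten and the tracker fills in the rest, which is the kind of trade-off that keeps per-frame computation cheap enough for a phone.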

[Music]

Sridhar Vedantham: One thing that either one of you mentioned- I don’t remember exactly who it was- is this thing about being able to detect if a driver is, say, drowsy. How does that actually work? Because, you know, it sounds a little science-fictiony to me, right? A phone sitting there mounted on the dashboard or on the windshield of a car, somehow magically telling you whether the driver is sleepy or drowsy, or whether the driver is doing the right thing while driving.

Venkat Padmanabhan: You’re right. I mean, knowing the true internal state of a driver is not easy, but there are outward manifestations of how tired or how sleepy someone is. Obvious ones are your eyes drooping, and also yawning and things like that. And those are things that we can pick out by just having a camera looking at the driver’s face. We’re not claiming that this is 100 percent perfect in terms of knowing driver state, but these are good indicators. In fact, we have some interesting data from our work, and maybe Akshay can talk about this long drive he went on, where these outward signs actually correlated with his own perception.

Akshay Nambi: Yeah, that’s true. There are several studies which have looked at eye blinking and yawning patterns to assess the state of the driver, and we developed such a detector. In fact, we were going on an interstate trip where we deployed HAMS in our own cab. It was early morning, so the driver was just awake and fresh- he was driving well, and our algorithms were also detecting that he was active. Then we stopped for breakfast, and afterwards the system started beeping, detecting that the eye blinking and yawning rates were much higher. We were in the cab and did not notice it, but the system was able to detect it. He was an experienced driver, so it made sense that we didn’t notice anything, but still he was drowsy.
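[Editor’s note: for readers curious how blink-based cues like this are commonly computed, one standard approach in the research literature is the eye aspect ratio (EAR) over the six eye landmarks a face-landmark model returns- the ratio drops when the eye closes. This is a generic sketch of that technique, not a description of the actual HAMS features, which the conversation does not detail.]

```python
# Eye-aspect-ratio (EAR) drowsiness cue over landmark points.
# p = [p1..p6]: six (x, y) eye landmarks; p1/p4 are the eye corners,
# p2/p3 the upper lid, p5/p6 the lower lid.

def ear(p):
    """Eye aspect ratio: small when the eye is closed."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    p1, p2, p3, p4, p5, p6 = p
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def drowsy(ear_series, threshold=0.2, min_consecutive=15):
    """Flag drowsiness when EAR stays low for many consecutive frames,
    distinguishing sustained eye closure from a normal blink."""
    run = 0
    for e in ear_series:
        run = run + 1 if e < threshold else 0
        if run >= min_consecutive:
            return True
    return False
```

A single blink dips the EAR for only a few frames, so requiring a long run of low values is what separates drowsiness from normal blinking.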

Venkat Padmanabhan: As a safety-conscious person, Akshay should have stopped the cab right then, but he was part of the research project. He wanted the data, so he kept going. He kept going, yeah.

Sridhar Vedantham: He is truly committed to the science. Cool, have there been other projects to do with traffic and road safety and so on anywhere in the world? And how does what you guys are doing differ from those things?

Venkat Padmanabhan: Yeah, so let me start and maybe Akshay can add to it. It’s a very rich space, right? Around the same time we started Nericell in the mid-2000s, a bunch of people started startups, as did university groups at MIT and Princeton and so on. IIT Bombay had an active project that was looking at similar things. And then, as I said, HAMS came about 10 years later. The biggest distinction, I would say, is what Akshay touched on in the beginning: compared to a lot of these existing systems that people use, including what insurance companies now use, at least in the western world, we have these camera sensors that allow us to get the context of a driver. I think Akshay gave an example of the driver being distracted. I’ll give you an even simpler example. Think of something like speeding. You would imagine speed is very easy to get- you have a GPS device, pretty accurate, that will give you speed. But when you talk about speeding and the safety issues related to that, it is not just your vehicle’s speed.

Sridhar Vedantham: It’s contextual.

Venkat Padmanabhan: It is how you’re doing relative to other vehicles. If the speed limit is 80 kilometers per hour and others are going at 90 or 100, it is safer to go at 90 or 100 than to be at 80, right?

Sridhar Vedantham: Yeah.

Venkat Padmanabhan: So that context is something that you get only with camera-based sensing and that for the most part the other work is not looking at. I would say we are perhaps among the earliest to look at that as part of the mix for this problem.

Sridhar Vedantham: Ok, and uh, I know this project is, I mean, you guys have been working on this project for about three years, four years now?

Venkat Padmanabhan: Oh, longer. We actually started in 2016.

Sridhar Vedantham: 2016.

Venkat Padmanabhan: So it’s a bit longer, yeah.

Sridhar Vedantham: Right. And I know that before we had this unfortunate pandemic, there used to be all these weird posts and signs in the basement of the office here, which I was told Akshay put up. I was also told I wasn’t supposed to run over those signs or move them, because they were there for a very important research project that Akshay was running, called HAMS. What were you guys doing there in the basement?

Akshay Nambi: Right. While we spoke about various detectors for understanding the driver’s state and how he’s driving, one of the key things we have developed is tracking how the driver drives- specifically, we look at the trajectory of the driving itself. Today you can use GPS to get the trajectory of how the vehicle is being driven, but the accuracy of these GPS devices is in meters. If you are trying to understand how the driver is parking- in a reverse parking position, or in a parallel parking position- you want to understand the trajectory at centimeter level: how many forward moves he made, how many reverses he took. And to do that we have come up with a way to use visual cues- the features in the environment, plus some of these markers which you have seen in the basement- which provide very accurate localization, up to a few centimeters. That’s how we were able to get the accurate trajectory. And this trajectory can be used for various applications. One, as I said, is seeing how the driver is parking; it could also be used for seeing how the driver is driving in an open environment, and this was mainly to see how new drivers are actually driving the vehicle.

Sridhar Vedantham: OK.

Venkat Padmanabhan: You can think of these markers- they’re called fiducial markers- as, you know, the beacons of a lighthouse. They give you a reference point, right? So you can locate yourself accurately, down to the centimeter level, using these as reference points.
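[Editor’s note: as a toy illustration of how fixed reference points enable localization, here is a 2D trilateration sketch- given distances to three markers at known positions, solve for the observer’s position. The marker coordinates are made up for the example; a real fiducial system such as ArUco recovers the full camera pose from a marker’s corner pixels rather than from scalar distances.]

```python
# Toy 2D localization from distances to three fiducial markers at
# known positions (hypothetical coordinates for illustration).

MARKERS = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (0.0, 3.0)}

def locate(d):
    """Trilaterate (x, y) from distances d[name] to the three markers."""
    (x1, y1), (x2, y2), (x3, y3) = MARKERS["A"], MARKERS["B"], MARKERS["C"]
    # Subtracting pairs of circle equations (x-xi)^2 + (y-yi)^2 = di^2
    # cancels the quadratic terms, leaving two linear equations in x, y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d["A"] ** 2 - d["B"] ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d["A"] ** 2 - d["C"] ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    # Solve the 2x2 linear system by Cramer's rule.
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

The markers play exactly the lighthouse role Venkat describes: because their positions are known in advance, a handful of noisy measurements against them pins the vehicle down far more precisely than meter-level GPS.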

Sridhar Vedantham: Ok. Now I also know that you’ve implemented HAMS at scale at various places. Uh, can we talk a bit about that, where it’s been implemented and what it’s being used for?

Venkat Padmanabhan: That’s a good question, Sridhar. Let me provide the broader context here, and then maybe Akshay can chime in with the details. As I said, we started HAMS in 2016. And in the years that followed, we looked at many different potential applications- for example, fleet monitoring, driver training and so on. Now, as luck would have it, we got connected with a very interesting organization called the Institute of Driving and Traffic Research, which is run by Maruti Suzuki, India’s largest car manufacturer. They are very much interested in issues of road safety, and, being Maruti, such a big player in the vehicle business, they’re very well connected with the government. So, in late 2018 we went to the Ministry of Road Transport and Highways along with them and met with a senior bureaucrat, and what was supposed to be a 15-minute meeting went on for over 2 hours, because the person we met with really liked the potential of HAMS, in particular in the context of driver testing. As you know, before you’re granted a license to drive, you’re supposed to be tested. But the reality in India is that, because of the scale of the country and its population, anecdotally, and as even some studies have shown, many licenses are issued without a proper test or without a test at all. Which obviously means untested and potentially unsafe drivers are on the road, and they’re contributing to the scale of the problem. Now, the government is alive to this problem, and they’re quite interested in technology for automation and so on. But the technology that people were using was quite expensive and therefore difficult to scale. So from this conversation we had in 2018, what emerged is that our simple smartphone-based solution could be used as a very low-cost, but at the same time high-coverage, solution.
When I say high coverage, I mean it can actually monitor many more parameters of a driver’s expertise in driving than the existing high-cost solutions. So that really got us started, and maybe Akshay can talk about where that journey led us.

Akshay Nambi: This meeting with the ministry in Delhi that Venkat mentioned led us to talking to the government of Uttarakhand, who were looking to set up a greenfield automated license testing track in Dehradun. This was the first implementation of HAMS to provide automated license testing. And remember, this was a research project. Taking a research project to an actual deployment, with a third-party organization and the government involved, was a major task. It’s not just about technology transfer; it’s about making the people on the ground understand the technology and run with it every day. We spent several months with the government officials translating the research project into an actual real-world deployment, which also included translating the parameters in their driver license testing guidelines into something which can be measured and monitored in the real world. For instance: is the driver wearing a seat belt, and how much time did it take to complete a particular maneuver? All of these things need to be monitored by the system, and this translation was one of the biggest challenges in taking the research project to a real-world deployment. The deployment went live in July 2019, with the entire test completely automated. Automated here means that there is no inspector sitting in the vehicle. The inspector comes to the vehicle, deploys the smartphone and then exits. You, as a candidate, drive within a confined area which has multiple maneuvers, and the smartphone monitors how you are driving against a set of defined parameters on which you are marked. At the end of the test, a report is automatically generated saying which maneuvers you passed and which you failed, along with video evidence of why you failed, and is provided to the candidate. And the final result is uploaded to the central government for issuing the license.

Sridhar Vedantham: This is very interesting. What has been the reaction of people to this? I’m sure that when people suddenly saw that they’re going to be evaluated by a smartphone, it must have thrown them for a loop, at least in the initial stages.

Akshay Nambi: Much to our surprise, it was completely the opposite. They very much welcomed the smartphone over an inspector in the vehicle. (Laughs)

Venkat Padmanabhan: I think people trust a machine more than a person, because they feel a person can perhaps be biased, whereas the machine they just trust. In fact, some of the comments we got said, you know, look, I failed the test, but I like the system.

Sridhar Vedantham: So people seem to be happy to remove the human error part of the thing. Out of the equation.

Venkat Padmanabhan: The subjectivity, right?

Sridhar Vedantham: Yeah, the subjectivity, yeah.

Venkat Padmanabhan: They feel the system is objective. The other thing I should mention, which obviously we didn’t plan for and didn’t anticipate: after COVID happened, this idea that you take the test without anyone else in the vehicle gained a new significance, because things like physical distancing became the norm. You could take a test with just a smartphone and not have to worry about sitting next to an inspector, or the inspector worrying about sitting next to a driver.

Venkat Padmanabhan: And that was an unexpected benefit of our approach.

Sridhar Vedantham: Right? That’s very interesting. I never thought of this, although I’ve been tracking this project for a while.

Venkat Padmanabhan: Neither did we. It just happened and then we realize how you know, in retrospect, it was a good idea.

Sridhar Vedantham: Yeah. And what’s the road map? Where does the project go now?

Venkat Padmanabhan: Yep. Akshay talked about the Dehradun deployment that happened in 2019. That really caught the attention of several state governments. In fact, they sent their people to Dehradun to see how the system was working, and they came back quite impressed. So there were deployments in Bihar, deployments happening in Andhra Pradesh, and some sites in Haryana, and several other states are in discussion to deploy the system. At this point we have four RTOs that are live, a couple more that are almost live- they’re pretty much ready to go- and about a dozen more in the works in various stages. But of course there are a thousand RTOs in the country. So there’s still a long way to go, and one of the challenges is that this has to proceed on a state-by-state basis, because it is…

Sridhar Vedantham: A state subject.

Venkat Padmanabhan: It’s a state subject, exactly. But we are working with external partners who we have enabled with the HAMS technology.

Sridhar Vedantham: Venkat, it sounds like this project has some serious potential for large societal impact.

Venkat Padmanabhan: That’s indeed the case, Sridhar. In fact, we think there’s huge potential here for beneficial impact, and that’s what has really been driving us. Just to give you context for the numbers. The scale of the problem we already talked about: there are hundreds of thousands of deaths and many more injuries happening in the country every year because of road accidents. And of course, it’s a global problem and the global problem is even bigger. The state of license testing is such that, by some estimates in public reports, over 50% of licenses are issued without a test or a proper test. So we believe a system like HAMS that improves the integrity of the testing process has huge potential to make a positive difference. Now, where we are in the journey today is that we have done about 28,000 automated license tests using HAMS across all the states where it’s been deployed. But an estimated 10 million or more licenses are issued in the country every year. So we think that by scaling to the 1000-plus RTOs that I talked about earlier, we can potentially touch a majority, or perhaps even all, of these license tests, and thereby have much safer roads in the country, with drivers who are well tested and really ready to drive being the only ones on the road.

Sridhar Vedantham: Fantastic. Now we are coming towards the end of another podcast. Are there any thoughts that you’d like to leave the listeners with before we wind up?

Venkat Padmanabhan: Sure, I can share something, and then maybe Akshay can add to it as well. Stepping back from the specific use case of HAMS and road safety, what this experience has taught us is that if you take technology and mate it with a societally significant problem- in this case, road safety- but really understand the problem and work with partners like we did- you know, we worked with the Maruti IDTR, we worked with other external partners, talked to multiple governments at the center and the state, and so on- really understand the problem, understand the technology, and bring them together in a meaningful way, you can make a huge difference. And that’s really quite inspirational for us, because it tells us that there’s a lot of good we can do as technologists and researchers.

Akshay Nambi: There’s not much to add to Venkat- he summed it up nicely. But one minor point would be that we don’t have to look for problems elsewhere. Problems are right next to us, and picking up these societal-impact problems has a lot of value.

Sridhar Vedantham: OK, fantastic thank you both for your time.

Venkat Padmanabhan: Thanks Sridhar. It’s been a pleasure.

Akshay Nambi: Thanks Sridhar, this was very good.