About
I am a Senior Researcher on the M365 Research team in E+D, physically based out of MSR-India (Bangalore). Our research adapts AI/ML approaches to address complex systems-level challenges and bottlenecks at scale in cloud/edge platforms, in collaboration with other regions, labs, and product teams. In particular, I drive work on solving decision-making problems via optimal control and reinforcement learning in the space of AIOps. We also focus on machine learning model optimization for efficient inference across the whole fleet of M365 cloud resources.
I earned my PhD in Computer Science, with a focus on Artificial Intelligence and Machine Learning, from the University of Texas at Dallas, advised by Dr. Sriraam Natarajan. My thesis centered on human-AI collaborative planning and learning in uncertain, structured/unstructured worlds, and on approximation techniques for such domains. After completing my doctoral studies, I worked for a couple of years at Samsung R&D before joining Microsoft Research.
Beyond my work at Microsoft, I try to be of service to the scientific community. I serve on the program committees of several conferences, including AAAI, IJCAI, SDM, ICLR, NeurIPS, and ICML, and I review for several journals, including Knowledge-Based Systems (Elsevier), MLJ, JAIR, and Frontiers in Robotics & AI.
Featured content
The AutoML Podcast: Smart NAS via Co-Regulated Shaping Reinforcement
A conversation between Adam Boaz Becker and Mayukh Das about using neural architecture search (NAS) for resource-constrained devices, and about AUTOCOMET, a recently published multi-objective reinforcement-learning-based framework. Topics include: how NAS research is done at Samsung and at Microsoft; the relationship between NAS and product teams; devices and the various types of constraints they expose; how to featurize hardware contexts; layer-wise latency calculations; surrogate models and the kinds of hardware-aware data they require; the current limitations of NAS; reinforcement learning and NAS; multi-objective optimization in the context of reinforcement learning; reward sparsity, reward shaping, and shaping functions; primary and secondary rewards; the concept of co-regulated shaping; Q functions and the effects of potentials; AUTOCOMET; and the future of NAS.