About
I am a research engineer on the Microsoft 365 team, working on applications of machine learning to Security and Compliance.
Prior to this role, I led the engineering team on Project Springfield, a Microsoft cloud service for finding security-critical bugs in software. Springfield bundles a suite of security testing tools, including white-box fuzzing technologies from Microsoft Research as well as black-box fuzzing, and leverages the Azure cloud for scalability.
In my early days at Microsoft, I worked on the Windows Compatibility team on various projects, including static code analysis, automated root-cause analysis systems for Windows, Windows application inventorying, and predictive health models for application compatibility.
Recent blog post
Gamifying machine learning for stronger security and AI models
To stay ahead of adversaries, who show no restraint in adopting tools and techniques that can help them attain their goals, Microsoft continues to harness AI and machine learning to solve security challenges. One area we've been experimenting with is autonomous systems. In a simulated enterprise network, we examine how autonomous agents, which are intelligent systems that independently carry out a set of operations using certain knowledge or parameters, interact within the environment, and we study how reinforcement learning techniques can be applied to improve security.

Today, we'd like to share some results from these experiments. We are open-sourcing the Python source code of a research toolkit we call CyberBattleSim, an experimental research project that investigates how autonomous agents operate in a simulated enterprise environment using high-level abstractions of computer networks and cybersecurity concepts. The toolkit uses the Python-based OpenAI Gym interface to allow training of automated agents using reinforcement learning algorithms. The code is available here: https://github.com/microsoft/CyberBattleSim
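To make the Gym interface concrete, here is a minimal sketch of an agent-environment interaction loop. It follows the standard OpenAI Gym reset/step pattern; the environment id 'CyberBattleToyCtf-v0' and the assumption that importing the cyberbattle package registers the environments are illustrative, so check the repository for the actual registered names and the observation/action space details.

    # Minimal sketch of a Gym interaction loop with a random agent.
    # Environment id and import behavior are assumptions for illustration;
    # see https://github.com/microsoft/CyberBattleSim for specifics.
    import gym
    import cyberbattle  # assumed: importing the package registers its Gym environments

    env = gym.make('CyberBattleToyCtf-v0')  # assumed environment id
    observation = env.reset()
    total_reward = 0.0

    for step in range(100):
        # A random agent; a reinforcement learning agent would pick
        # its action from a learned policy instead.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break

    print(f"Episode ended after {step + 1} steps, total reward {total_reward}")
    env.close()

Because the toolkit exposes this standard interface, any reinforcement learning algorithm written against Gym environments can in principle be plugged in by replacing the random action selection above.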