A Historical View of Large Margin Optimization Methods
Starting from the design of robust Hopfield networks in the early 1990s, through SVMs, to structured output problems, a variety of methods have been proposed for solving the optimization problems that arise in large margin training. Some of these methods operate on the dual problem while others work directly on the primal. This talk will give a historical view of these methods, describing the motivations that led to their development.
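As context for the primal methods the abstract mentions (this sketch is not from the talk itself), a minimal illustration is subgradient descent on the L2-regularized hinge loss, the standard primal objective for a linear SVM. The step-size schedule and helper names below are illustrative assumptions, loosely following Pegasos-style updates:

```python
# Minimal sketch: primal subgradient descent for a linear SVM,
#   min_w  (lam/2)*||w||^2 + (1/n) * sum_i max(0, 1 - y_i * <w, x_i>)
# Names and the step-size schedule here are illustrative, not from the talk.
import random

def train_linear_svm(data, lam=0.01, epochs=100):
    """data: list of (x, y) pairs, x a list of floats, y in {-1, +1}."""
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    rng = random.Random(0)
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size (Pegasos-style)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # Subgradient of the regularizer (always present):
            w = [(1.0 - eta * lam) * wi for wi in w]
            # Hinge-loss subgradient, active only when the margin is violated:
            if margin < 1.0:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

# Two linearly separable points; the learned w should separate them.
w = train_linear_svm([([1.0, 0.0], 1), ([-1.0, 0.0], -1)])
print(1 if w[0] > 0 else -1)  # → 1
```

Dual methods such as SMO instead optimize over the Lagrange multipliers of the margin constraints, which is where kernels enter naturally.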
Speaker Bio
Sathiya Keerthi is a principal scientist in the newly started Cloud and Information Services Lab at Microsoft. Prior to that he was with Yahoo! Research, starting in 2004. Before joining Yahoo!, he worked for 10 years at the Indian Institute of Science, Bangalore, and for 5 years at the National University of Singapore. His main areas of expertise are machine learning and optimization. Over the last fifteen years his research has focused on the development of practical algorithms for machine learning; in earlier years he did research in robotics, computer graphics and optimal control. His work on support vector machines (the improved SMO algorithm), polytope distance computation (the GJK algorithm) for collision detection (widely used in computer games) and model predictive control (stability theory) is highly cited. He has published more than 100 papers in leading journals and conferences; see http://www.keerthis.com for his recent papers. His current research focuses on structured output models and fast methods for large scale data mining problems. Keerthi is an Action Editor of JMLR (Journal of Machine Learning Research).
- Date:
- Speaker:
- Sathiya Keerthi
- Affiliation:
- Microsoft