Learning, Mixing, and Complexity: A Free Ride on the Second Law

The principle of maximum entropy states that, among all hypothetical probability distributions consistent with some given data, the one of maximum entropy best represents the current state of knowledge. One might naturally expect this philosophy to yield a “simple” hypothesis, since it avoids making the hypothesis any more informative than the data warrant.
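As a rough illustration (the notation below is introduced here for exposition and is not taken from the announcement), the principle can be phrased as a constrained optimization: among all distributions reproducing the observed statistics, choose the one of largest Shannon entropy.

\[
\max_{p}\; H(p) = -\sum_{x} p(x)\log p(x)
\qquad \text{subject to} \qquad \mathbb{E}_{p}[f_i(x)] = c_i, \quad i = 1,\dots,k.
\]

The maximizer has the exponential (Gibbs) form \( p(x) \propto \exp\big(\sum_i \lambda_i f_i(x)\big) \), so the hypothesis is described by only the \(k\) multipliers \(\lambda_1,\dots,\lambda_k\) rather than a full table of probabilities, which is one concrete sense in which maximum entropy favors “simple” hypotheses.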

Viewing entropy as an additional resource to be optimized is an extremely powerful idea with a wide range of applications (and a correspondingly large array of names: boosting, entropy-regularized gradient descent, multiplicative weights update, log-Sobolev inequalities, Gibbs measures, etc.).
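As one standard formulation (a sketch for exposition, not a claim about the talk's contents), several of these methods amount to trading off a loss against entropy: the free-energy-type objective

\[
\min_{p}\; \mathbb{E}_{p}[\ell(x)] - \frac{1}{\beta}\, H(p)
\]

is minimized by the Gibbs measure \( p(x) \propto \exp(-\beta\,\ell(x)) \), and iterating this exponential reweighting on cumulative losses recovers the exponential-weights form of the multiplicative weights update.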

I will focus specifically on the role of entropy maximization in encouraging simplicity. This has a number of surprising applications in discrete mathematics and the theory of computation. We will see three instantiations of this principle, in additive number theory, functional analysis, and complexity theory. For the last application, it turns out that one needs to extend maximum entropy to the setting of quantum information and the von Neumann entropy.
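For context (a standard definition, not part of the abstract), the quantum analogue replaces a probability distribution by a density matrix \(\rho\), whose von Neumann entropy is

\[
S(\rho) = -\operatorname{Tr}(\rho \log \rho),
\]

reducing to the Shannon entropy when \(\rho\) is diagonal; the quantum max-entropy problem then maximizes \(S(\rho)\) subject to linear constraints on \(\rho\).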

The philosophy and applications will be discussed at a high level suitable for a general scientific audience.

Speaker Details

James R. Lee is an Associate Professor of Computer Science at the University of Washington. His research leverages tools from probability and analysis to attack fundamental problems in discrete mathematics, algorithms, and complexity theory. His work has been recognized by an NSF CAREER award, a Sloan Research Fellowship, and recently a best paper award at STOC 2015 (with P. Raghavendra and D. Steurer) for an application of entropy maximization to computational lower bounds.

Date:
Speaker:
James R. Lee
Affiliation:
University of Washington