Learning with Limited Supervision
Stefano Ermon is an Assistant Professor of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory, and a fellow of the Woods Institute for the Environment. His research is centered on techniques for probabilistic modeling of data, inference, and optimization, and is motivated by a range of applications, in particular ones in the emerging field of computational sustainability. He has won several awards, including four Best Paper Awards (AAAI, UAI, and CP), an NSF CAREER Award, ONR and AFOSR Young Investigator Awards, a Sony Faculty Innovation Award, an AWS Machine Learning Award, a Hellman Faculty Fellowship, a Microsoft Research Fellowship, and the IJCAI Computers and Thought Award. Stefano earned his Ph.D. in Computer Science at Cornell University in 2015.
Many of the recent successes of machine learning have depended on the availability of large quantities of labeled data. In contrast, humans are often able to learn from very few labeled examples, or from only high-level instructions for how a task should be performed. In this talk, I will present new generative modeling approaches for learning useful models in settings where labeled training data is scarce or entirely unavailable. Finally, I will discuss ways to use prior knowledge (such as physical laws or simulators) to provide weak forms of supervision.