Welcome to Probably Approximately Correct Learners Episode 3! In this episode, Chara chats with Professor Sam Hopkins.
Sam is a theoretical computer scientist and Assistant Professor at MIT, in the Theory of Computing group in the Department of Electrical Engineering and Computer Science, where he holds the Jamieson Career Development Chair. His interests include algorithms, the theory of machine learning, semidefinite programming, the sum-of-squares method, and bicycles.
Before MIT, he was a Miller Fellow in the theory of computing group at UC Berkeley, hosted by Prasad Raghavendra and Luca Trevisan. Before that, he received his PhD from Cornell, advised by David Steurer.
Welcome to our second Probably Approximately Correct Learners episode! In this episode, Chara chats with Professor Clément Canonne.
Clément Canonne is a Senior Lecturer in the School of Computer Science at the University of Sydney, an ARC DECRA Fellow, and a 2023 NSW Young Tall Poppy. He obtained his Ph.D. in 2017 from Columbia University, before joining Stanford as a Motwani Postdoctoral Fellow and then IBM Research as a Goldstine Postdoctoral Fellow. His research interests span distribution testing and learning theory, focusing in particular on differential privacy and the computational aspects of learning and statistical inference subject to resource or information constraints. He really likes elephants and wombats.
Welcome to our first ever Probably Approximately Correct Learners episode! In this episode, Chara chats with Professor Jamie Morgenstern (UW).
Jamie is an assistant professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She was previously an assistant professor in the School of Computer Science at Georgia Tech. Prior to starting as faculty, she was a Warren Center fellow at the University of Pennsylvania, hosted by Michael Kearns, Aaron Roth, and Rakesh Vohra. She completed her PhD working with Avrim Blum at Carnegie Mellon University. She studies the social impact of machine learning and the impact of social behavior on ML's guarantees. For example, how should machine learning be made robust to the behavior of the people generating its training or test data? And how should we ensure that the models we design do not exacerbate inequalities already present in society? You can find more information about Jamie and her research on her website: https://jamiemorgenstern.com/.
Jamie and Chara talked about this paper: https://arxiv.org/abs/2206.02667.