Gautam Goel

Simons Institute for the Theory of Computing
University of California at Berkeley

Email: ggoel@berkeley.edu


I am a postdoctoral researcher at the Simons Institute at UC Berkeley, where I am part of the Foundations of Data Science Institute (FODSI). I am broadly interested in machine learning, optimization, signal processing, and control, especially 1) online learning and learning theory, and 2) integrating machine learning with signal processing and control.

Before moving to Berkeley I was a PhD student in the Computing and Mathematical Sciences (CMS) department at Caltech, where I was extremely fortunate to be advised by Babak Hassibi. My PhD work was supported by a National Science Foundation Graduate Research Fellowship and an Amazon AI4Science Fellowship. My thesis was awarded the Bhansali Family Doctoral Prize in Computer Science, which is awarded by the CMS department to a single outstanding dissertation in computer science each year.


Papers. Sorted in reverse chronological order.

  1. Best of Both Worlds in Online Control: Competitive Ratio and Policy Regret with Naman Agarwal, Karan Singh, and Elad Hazan. Preprint. [arXiv]

  2. Measurement-Feedback Control with Optimal Data-Dependent Regret with Babak Hassibi. Preprint. [arXiv]

  3. Online Estimation and Control with Optimal Pathlength Regret with Babak Hassibi. L4DC 2022. [arXiv]

  4. Competitive Control with Babak Hassibi. IEEE Transactions on Automatic Control. [arXiv]

  5. The Power of Linear Controllers in LQR Control with Babak Hassibi. CDC 2022 (Invited Session on Non-Asymptotic Learning and Control of Dynamical Systems). [arXiv]

  6. Regret-Optimal Estimation and Control with Babak Hassibi. IEEE Transactions on Automatic Control (Special Issue on Learning and Control). [arXiv]

  7. Regret-Optimal Full-Information Control with Oron Sabag, Sahin Lale, and Babak Hassibi. ACC 2021. [arXiv]

  8. Regret-Optimal Measurement-Feedback Control with Babak Hassibi. L4DC 2021. [arXiv]

  9. Online Optimization with Predictions and Non-Convex Losses with Yiheng Lin and Adam Wierman. SIGMETRICS 2020. [arXiv]

  10. Beyond Online Balanced Descent: An Optimal Algorithm for Smoothed Online Optimization with Yiheng Lin, Haoyuan Sun, and Adam Wierman. NeurIPS 2019 (Spotlight Presentation). [arXiv]

  11. An Online Algorithm for Smoothed Regression and LQR Control with Adam Wierman. AISTATS 2019. [arXiv]

  12. Smoothed Online Convex Optimization in High Dimensions via Online Balanced Descent with Niangjun Chen and Adam Wierman. COLT 2018. [arXiv]

  13. Thinking Fast and Slow: Optimization Decomposition across Timescales with Niangjun Chen and Adam Wierman. CDC 2017. [arXiv]