HELLO!

I am an assistant professor at McGill University in the Mathematics and Statistics department . I am a CIFAR AI Chair and I am an active member of the Montreal Machine Learning Optimization Group (MTL MLOpt) at MILA. Moreover I am the lead organizer of the OPT-ML Workshop for NeurIPS 2020. Previously, I was a research scientist at Google Brain, Montreal. You can view my CV here if you are interested in more details.

I received my Ph.D. from the Mathematics Department at the University of Washington (2017) under Prof. Dmitriy Drusvyatskiy. I then held a postdoctoral position in the Industrial and Systems Engineering Department at Lehigh University, where I worked with Prof. Katya Scheinberg, followed by an NSF postdoctoral fellowship (2018-2019) under Prof. Stephen Vavasis in the Combinatorics and Optimization Department at the University of Waterloo.

My research broadly focuses on designing and analyzing algorithms for large-scale optimization problems, motivated by applications in data science. The techniques I use draw from a variety of fields including probability, complexity theory, and convex and nonsmooth analysis.

For a magazine article about me and my research, see "Rising Star in AI," REACH Magazine, 2022.

The University of Washington, Lehigh University, the University of Waterloo, McGill University, and MILA have strong optimization groups that span many departments: Math, Stats, CSE, EE, and ISE. If you are interested in optimization talks at these places, check out their seminar series.

EMAIL: yumiko88(at)uw(dot)edu or yumiko88(at)u(dot)washington(dot)edu or courtney(dot)paquette(at)mcgill(dot)ca

OFFICE: BURN 913

RESEARCH

My research interests lie at the frontier of large-scale continuous optimization. Nonconvexity, nonsmooth analysis, complexity bounds, and interactions with random matrix theory and high-dimensional statistics appear throughout my work. Modern applications of machine learning demand these advanced tools and motivate me to develop theoretical guarantees with an eye towards immediate practical value. My current research program is concerned with developing a coherent mathematical framework for analyzing the average-case (typical) complexity and exact dynamics of learning algorithms in the high-dimensional setting.
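
As a concrete illustration of the "exact dynamics" phenomenon, the short numpy sketch below runs one-pass SGD on a random streaming least-squares problem and shows that the entire risk curve concentrates around a deterministic trajectory as the dimension grows. This is an illustrative toy only, not code from my papers: the Gaussian least-squares model, the 1/d stepsize scaling, and all problem sizes are assumptions chosen for the demo.

```python
# Illustrative sketch (not code from my papers): concentration of SGD's
# risk curve in high dimensions on a streaming least-squares model.
import numpy as np

rng = np.random.default_rng(0)

def sgd_risk_curve(d, horizon=4.0, step=0.5, n_checkpoints=50):
    """One-pass SGD on min_x E_a[(a^T (x - x*))^2] / 2 with a ~ N(0, I_d).

    Records the population risk ||x - x*||^2 / 2 at n_checkpoints values
    of the rescaled time t = iteration / d."""
    n_steps = int(horizon * d)                    # run for O(d) iterations
    checks = set(np.linspace(1, n_steps, n_checkpoints, dtype=int))
    x_star = rng.standard_normal(d) / np.sqrt(d)  # planted signal, ||x*|| ~ 1
    x = np.zeros(d)
    risks = []
    for k in range(1, n_steps + 1):
        a = rng.standard_normal(d)                # fresh sample each step
        x -= (step / d) * (a @ (x - x_star)) * a  # stepsize scaled by 1/d
        if k in checks:
            risks.append(0.5 * np.sum((x - x_star) ** 2))
    return np.array(risks)

# Independent runs trace out nearly identical risk curves as d grows:
for d in (50, 500, 5000):
    runs = np.stack([sgd_risk_curve(d) for _ in range(5)])
    print(f"d={d:4d}: final risk = {runs[:, -1].mean():.4f}, "
          f"max spread of the curve across runs = {runs.std(axis=0).max():.4f}")
```

The 1/d stepsize and the O(d) iteration horizon reflect the high-dimensional scaling in which these deterministic limits appear.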

You can view my thesis, Structure and complexity in non-convex and nonsmooth optimization.

RESEARCH PAPERS

* student author

  • C. Paquette, E. Paquette, B. Adlam, J. Pennington. Implicit Regularization or Implicit Conditioning? Exact Risk Trajectories of SGD in High Dimensions. (accepted to NeurIPS 2022), 2022, arXiv pdf
  • K. Lee*, A.N. Cheng*, E. Paquette, C. Paquette. Trajectory of Mini-Batch Momentum: Batch Size Saturation and Convergence in High-Dimensions. (accepted to NeurIPS 2022), 2022, arXiv pdf
  • C. Paquette, E. Paquette, B. Adlam, J. Pennington. Homogenization of SGD in high-dimensions: Exact dynamics and generalization properties. (submitted), 2022, arXiv pdf
  • L. Cunha*, G. Gidel, F. Pedregosa, C. Paquette, D. Scieur. Only Tails Matter: Average-case Universality and Robustness in the Convex Regime. Proceedings of the 39th International Conference on Machine Learning (ICML) (2022) no. 162, 4474-4491, pdf
  • C. Paquette and E. Paquette. Dynamics of Stochastic Momentum Methods on Large-scale, Quadratic Models. Advances in Neural Information Processing Systems (NeurIPS), volume 34, 2021, pdf
  • C. Paquette, K. Lee*, F. Pedregosa, and E. Paquette. SGD in the Large: Average-case Analysis, Asymptotics, and Stepsize Criticality. Proceedings of Thirty Fourth Conference on Learning Theory (COLT) (2021) no. 134, 3548-3626, pdf
  • C. Paquette, B. van Merrienboer, F. Pedregosa, and E. Paquette. Halting time is predictable for large models: A Universality Property and Average-case Analysis. (2020) (to appear in Found. Comput. Math.), arXiv pdf
  • S. Baghal, C. Paquette, and S. Vavasis. A termination criterion for stochastic gradient for binary classification. (2020) (submitted), arXiv pdf
  • C. Paquette and S. Vavasis. Potential-based analyses of first-order methods for constrained and composite optimization. (2019) (submitted), arXiv pdf
  • C. Paquette and K. Scheinberg. A stochastic line-search method with convergence rate. SIAM J. Optim. (30) (2020) no. 1, 349-376, doi:10.1137/18M1216250, arXiv pdf
  • D. Davis, D. Drusvyatskiy, K. MacPhee, and C. Paquette. Subgradient methods for sharp weakly convex functions. J. Optim. Theory Appl. (179) (2018) no. 3, 962-982, doi:10.1007/s10957-018-1372-8, arXiv pdf
  • D. Davis, D. Drusvyatskiy, and C. Paquette. The nonsmooth landscape of phase retrieval. IMA J. Numer. Anal. (40) (2020) no.4, 2652-2695, doi:10.1093/imanum/drz031, arXiv pdf
  • C. Paquette, H. Lin, D. Drusvyatskiy, J. Mairal, and Z. Harchaoui. Catalyst Acceleration for Gradient-Based Non-Convex Optimization. 21st International Conference on Artificial Intelligence and Statistics (AISTATS 2018), arXiv pdf
  • D. Drusvyatskiy and C. Paquette. Efficiency of minimizing compositions of convex functions and smooth maps. Math. Program. 178 (2019), no. 1-2, Ser. A, 503-558, doi:10.1007/s10107-018-1311-3, arXiv pdf
  • D. Drusvyatskiy and C. Paquette. Variational analysis of spectral functions simplified. J. Convex Anal. 25(1), 2018. arXiv pdf

EXPOSITORY WRITING

Survey papers based on research projects.

  • C. Paquette and E. Paquette. High-dimensional Optimization. (submitted), 2022, pdf

Research highlights (figure captions):

  • Practice meets theory: predicting performance of SGD on CIFAR-10 data
  • Exact dynamics of SGD and concentration effects
  • High-dimensional Analysis of Optimization Algorithms: concentration of halting times
  • Random Matrix Theory & Machine Learning: eigenvalues of the covariance matrix of the MNIST data set using random features
  • Stochastic Optimization: SGD + momentum parameter choices
  • Nonsmooth & Nonconvex: convex composite (nonsmooth, nonconvex) formulation of robust phase retrieval

PRESENTATIONS

I have given talks on the research above at the following venues.

COLLOQUIUM/PLENARY SPEAKER

  • Plenary speaker, Conference on the Mathematical Theory of Deep Neural Networks (DeepMath), UC San Diego, CA, November 2022, upcoming
  • Information Systems Laboratory Colloquium, Stanford University, October 2022, upcoming
  • Plenary speaker, GroundedML Workshop, 10th International Conference on Learning Representations (ICLR), (virtual event), April 2022
  • Courant Institute of Mathematical Sciences Colloquium, New York University (NYU), New York, NY (virtual event), January 2022
  • Mathematics Department Colloquium, University of California-Davis (UC-Davis), Davis, CA (virtual event), January 2022
  • Operations Research and Financial Engineering Colloquium, Princeton University, Princeton, NJ (virtual event), January 2022
  • Computational and Applied Mathematics (CAAM) Colloquium, Rice University, Houston, TX, December 2021
  • Plenary speaker, Beyond first-order methods in machine learning systems Workshop, International Conference on Machine Learning (ICML), (virtual event), July 2021
  • Operations Research Center Seminar, Sloan School of Management, Massachusetts Institute of Technology (MIT), Boston, MA, February 2021
  • Operations Research and Information Engineering (ORIE) Colloquium, Cornell University, Ithaca, NY (virtual event), February 2021
  • Tutte Colloquium, Combinatorics and Optimization Department, University of Waterloo, Waterloo, ON (virtual event), June 2020
  • Center for Artificial Intelligence Design (CAIDA) Colloquium, University of British Columbia (UBC), Vancouver, BC (virtual event), June 2020
  • Math Colloquium, Ohio State University, Columbus, OH, February 2019
  • Applied Math Colloquium, Brown University, Providence, RI, February 2019
  • Mathematics and Statistics Colloquium, Saint Louis University, St. Louis, MO, November 2019

INVITED TALKS

  • Department of Decision Sciences Seminar, HEC, Montreal, QC, December 2022, upcoming
  • Dynamical Systems Seminar, Brown University, Providence, RI, October 2022, upcoming
  • Tea Talk, Quebec Artificial Intelligence Institute (MILA), Montreal, QC, September 2022
  • Adrian Lewis’ 60th Birthday Conference (contributed talk), University of Washington, Seattle, WA, August 2022
  • Stochastic Optimization Session (contributed talk), International Conference on Continuous Optimization (ICCOPT 2022), Lehigh University, Bethlehem, PA, July 2022
  • Conference on Random Matrix Theory and Numerical Linear Algebra (contributed talk), University of Washington, Seattle, WA, June 2022
  • Dynamics of Learning and Optimization in Brains and Machines, UNIQUE Student Symposium, MILA, Montreal, QC, June 2022
  • The Mathematics of Machine Learning, Women and Mathematics, Institute for Advanced Study, Princeton, NJ, May 2022
  • Robustness and Resilience in Stochastic Optimization and Statistical Learning: Mathematical Foundations, Ettore Majorana Foundation and Centre for Scientific Culture, Erice, Italy, May 2022
  • Optimization in Data Science (contributed talk), INFORMS Optimization Society Meeting 2022, Greenville, SC, March 2022
  • Optimization and ML Workshop (contributed talk), Canadian Mathematical Society (CMS), Montreal, QC, December 2021
  • Operations Research/Optimization Seminar, UBC-Okanagan and Simon Fraser University, Burnaby, BC, December 2021
  • Machine Learning Advances and Applications Seminar, Fields Institute for Research in Mathematical Sciences, Toronto, ON, November 2021
  • Methods for Large-Scale, Nonlinear Stochastic Optimization Session (contributed talk), SIAM Conference on Optimization, Spokane, WA, July 2021
  • MILA TechAide AI Conference (invited talk), Montreal, QC, May 2021
  • Minisymposium on Random Matrices and Numerical Linear Algebra (contributed talk), SIAM Conference on Applied Linear Algebra (virtual event), May 2021
  • Numerical Analysis Seminar (invited talk), Applied Mathematics, University of Washington, Seattle, WA, April 2021
  • Applied Mathematics Seminar (invited talk), Applied Mathematics, McGill University, Montreal, QC, January 2021
  • Optimization and ML Workshop (contributed talk), Canadian Mathematical Society (CMS), Montreal, QC, December 2020
  • UW Machine Learning Seminar (invited talk), Paul G. Allen School of Computer Science, University of Washington, Seattle, WA, November 2020
  • Soup and Science (contributed talk), McGill University, Montreal, QC, September 2020
  • Conference on Optimization, Fields Institute for Research in Mathematical Science, Toronto, ON, November 2019
  • Applied Math Seminar, McGill University, Montreal, QC, February 2019
  • Applied Math and Analysis Seminar, Duke University, Durham, NC, January 2019
  • Google Brain Tea Talk, Google, Montreal, QC, January 2019
  • Young Researcher Workshop, Operations Research and Information Engineering (ORIE), Cornell University, Ithaca, NY, October 2018
  • DIMACS/NSF-TRIPODS conference, Lehigh University, Bethlehem, PA, July 2018
  • Session talk, INFORMS annual meeting, Houston, TX, October 2017
  • Optimization Seminar, Lehigh University, Bethlehem, PA, September 2017
  • Session talk, SIAM Conference on Optimization, Vancouver, BC, May 2017
  • Optimization and Statistical Learning, Les Houches, April 2017
  • West Coast Optimization Meeting, University of British Columbia (UBC), Vancouver, BC, September 2016

SUMMER SCHOOLS & TUTORIALS

  • Nonconvex and Nonsmooth Optimization Tutorial, East Coast Optimization Meeting, George Mason University, Fairfax, VA, April 2022
  • Average Case Complexity Tutorial, Workshop on Optimization under Uncertainty, Centre de recherches mathématiques (CRM), Montreal, QC, September 2021
  • Stochastic Optimization, Summer School talk for University of Washington’s ADSI Summer School on Foundations of Data Science, Seattle, WA, August 2019

WORKSHOPS & TUTORIALS

Workshops

I have had the pleasure of organizing some wonderful optimization workshops. Please consider submitting papers to these venues.

  • Optimization for Machine Learning (OPT-ML) Workshop, part of NeurIPS
    • Program Chair (2020, 2021, 2022)
    • Annual event in late November or early December
    • Website: https://opt-ml.org/
    • Accepts papers starting in July (see website for details)

  • Montreal AI Symposium
    • Program Chair (2021)
    • Annual event in September or October
    • Website: http://montrealaisymposium.com/
    • Accepts papers starting in June (see website for details); submissions must be connected to the greater Montreal area

Tutorials

I have organized the following tutorials based on my research. For more information, please see the corresponding website.

Eigenvalues of Wishart matrices

Average-case analysis
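
To give a flavor of the "Eigenvalues of Wishart matrices" tutorial, here is a minimal, self-contained numpy sketch comparing the spectrum of a sample covariance (Wishart) matrix to the Marchenko-Pastur law. This is not the tutorial's own code; the unit-variance Gaussian data and the matrix sizes below are assumptions chosen for the demo.

```python
# Illustrative sketch: Wishart eigenvalues vs. the Marchenko-Pastur law.
import numpy as np

rng = np.random.default_rng(1)

n, d = 4000, 2000                  # samples x features, aspect ratio r = d/n
r = d / n
X = rng.standard_normal((n, d))    # unit-variance Gaussian data (assumption)
W = (X.T @ X) / n                  # sample covariance (Wishart) matrix
eigs = np.linalg.eigvalsh(W)

# Marchenko-Pastur support edges for unit-variance entries: (1 +- sqrt(r))^2
lam_minus, lam_plus = (1 - np.sqrt(r)) ** 2, (1 + np.sqrt(r)) ** 2
print(f"empirical spectrum support: [{eigs.min():.3f}, {eigs.max():.3f}]")
print(f"Marchenko-Pastur edges:     [{lam_minus:.3f}, {lam_plus:.3f}]")

# Compare the eigenvalue histogram with the MP density
# rho(x) = sqrt((lam_plus - x)(x - lam_minus)) / (2 pi r x) on the bulk.
hist, edges = np.histogram(eigs, bins=40, range=(lam_minus, lam_plus),
                           density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
rho = (np.sqrt((lam_plus - centers) * (centers - lam_minus))
       / (2 * np.pi * r * centers))
print(f"max histogram-vs-density deviation: {np.max(np.abs(hist - rho)):.3f}")
```

The same comparison underlies average-case analysis: the spectrum of the data (here, Marchenko-Pastur) governs the typical behavior of first-order methods on random problem instances.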

TEACHING

Current Course

  • Math 417/517 Linear Optimization/Honors Linear Optimization, Fall 2022, Website

Past Courses

I have taught the following courses:

    McGill University, Mathematics and Statistics Department
  • Math 560 (graduate, instructor): Numerical Optimization, Winter 2021, Winter 2022
  • Math 315 (undergraduate, instructor): Ordinary Differential Equations, Fall 2020, Fall 2021
  • Math 597 (graduate, instructor): Topics course on Convex Analysis and Optimization, Fall 2021
    Lehigh University, Industrial and Systems Engineering
  • ISE 417 (graduate, instructor): Nonlinear Optimization, Spring 2018
    University of Washington, Mathematics Department
  • Math 125 BC/BD (undergraduate, TA): Calculus II Quiz Section, Winter 2017
  • Math 307 E (undergraduate, instructor): Intro to Differential Equations, Winter 2016
  • Math 124 CC (undergraduate, TA): Calculus I, Autumn 2015
  • Math 307 I (undergraduate, instructor): Intro to Differential Equations, Spring 2015
  • Math 125 BA/BC (undergraduate, TA): Calculus II, Winter 2015
  • Math 307 K (undergraduate, instructor): Intro to Differential Equations, Autumn 2014
  • Math 307 L (undergraduate, instructor): Intro to Differential Equations, Spring 2014

Biosketch (for talks)

Courtney Paquette is an assistant professor at McGill University and a CIFAR Canada AI Chair at MILA. Paquette’s research broadly focuses on designing and analyzing algorithms for large-scale optimization problems, motivated by applications in data science. She received her Ph.D. from the Mathematics Department at the University of Washington (2017), held postdoctoral positions at Lehigh University (2017-2018) and the University of Waterloo (NSF postdoctoral fellowship, 2018-2019), and was a research scientist at Google Research, Brain Team, in Montreal (2019-2020).

My research is currently supported by a CIFAR AI Chair (MILA), an NSERC Discovery Grant, and the FRQNT New University Researchers' Start-up Program.

McGill University:
Random Matrix Theory & Machine Learning & Optimization Graduate Seminar (RMT+ML+OPT Seminar)

Current Information, Fall 2023

All are welcome to attend (in person) at McGill University.
For a complete schedule, see Website

The goal of the seminar is to give graduate and undergraduate students the opportunity to learn how to present technical papers in machine learning, random matrix theory, and optimization.