Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.

Blog Post number 4

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

On Averaging and Extrapolation for Gradient Descent

Published in ArXiv Preprint, 2024

This work considers the effect of averaging and extrapolation of the iterates of gradient descent in smooth convex optimization. We show that for several common stepsize sequences, averaging cannot improve gradient descent’s worst-case performance. In contrast, we prove a conceptually simple and computationally cheap extrapolation scheme strictly improves the worst-case convergence rate: when initialized at the origin, reporting \((1+1/\sqrt{16N\log(N)})x_N\) rather than \(x_N\) improves the best possible worst-case performance by the same amount as conducting \(O(\sqrt{N/\log(N)})\) more gradient steps.

Recommended citation: Alan Luner, Benjamin Grimmer. (2024). "On Averaging and Extrapolation for Gradient Descent." ArXiv Preprint 2402.12493.
Download Paper
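
The extrapolation step quoted in the abstract above is simple enough to try numerically. The following is a minimal sketch, not code from the paper: it runs plain gradient descent with stepsize 1/L on an arbitrary least-squares objective, starting from the origin as the abstract assumes, and then reports the extrapolated point \((1+1/\sqrt{16N\log(N)})x_N\). The objective, dimensions, and step count are illustrative choices; since the paper's guarantee concerns worst-case performance, the extrapolated point need not be better on any particular instance.

```python
# Minimal sketch of the extrapolation scheme described in the abstract above.
# The test function f(x) = 0.5*||A @ x - b||^2 and the stepsize 1/L are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)

def f(x):
    r = A @ x - b
    return 0.5 * r @ r

def grad_f(x):
    return A.T @ (A @ x - b)

L = np.linalg.norm(A, 2) ** 2   # smoothness constant of f (largest singular value squared)
N = 200                         # number of gradient steps
x = np.zeros(A.shape[1])        # initialize at the origin, as in the abstract

for _ in range(N):
    x = x - grad_f(x) / L       # plain gradient descent with stepsize 1/L

# Report (1 + 1/sqrt(16*N*log(N))) * x_N instead of x_N.
x_extrapolated = (1.0 + 1.0 / np.sqrt(16 * N * np.log(N))) * x

print("f(x_N)            =", f(x))
print("f(x_extrapolated) =", f(x_extrapolated))
```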

Performance Estimation for Smooth and Strongly Convex Sets

Published in ArXiv Preprint, 2024

This work extends recent computer-assisted design and analysis techniques for first-order optimization over structured functions, known as performance estimation, to apply to structured sets. We prove “interpolation theorems” for smooth and strongly convex sets with Slater points and bounded diameter, showing a wide range of extremal questions amount to structured mathematical programs. Our theory provides finite-dimensional formulations of performance estimation problems for algorithms utilizing separating hyperplane oracles, linear optimization oracles, and/or projection oracles of smooth/strongly convex sets, and we demonstrate its applications.

Recommended citation: Alan Luner, Benjamin Grimmer. (2024). "Performance Estimation for Smooth and Strongly Convex Sets." ArXiv Preprint 2410.14811.
Download Paper
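
As a small illustration of the three oracle types named in the abstract above (and not of the performance estimation machinery itself), the sketch below implements a projection oracle, a linear optimization oracle, and a separating hyperplane oracle for a Euclidean ball, a standard example of a set that is both smooth and strongly convex. The center, radius, and query point are illustrative assumptions.

```python
# Minimal sketch of the three set oracles mentioned in the abstract above,
# instantiated for a Euclidean ball.  The ball's center, radius, and the query
# point are illustrative assumptions, not objects from the paper.
import numpy as np

center = np.zeros(3)
radius = 2.0

def projection_oracle(x):
    """Project x onto the ball {y : ||y - center|| <= radius}."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + radius * d / dist

def linear_optimization_oracle(g):
    """Return argmin over the ball of <g, y> (assumes g != 0)."""
    return center - radius * g / np.linalg.norm(g)

def separating_hyperplane_oracle(x):
    """For x outside the ball, return (normal, offset) such that
    <normal, y> <= offset for every y in the ball while <normal, x> > offset."""
    d = x - center
    normal = d / np.linalg.norm(d)
    offset = normal @ center + radius
    return normal, offset

x = np.array([3.0, 0.0, 0.0])
print(projection_oracle(x))             # -> [2. 0. 0.]
print(linear_optimization_oracle(x))    # -> [-2. 0. 0.]
print(separating_hyperplane_oracle(x))  # -> (array([1., 0., 0.]), 2.0)
```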

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.