M.Sc. thesis: Understanding the role of averaging in non-smooth stochastic gradient descent

Published in the University of British Columbia Open Collections, 2020


Consider the problem of minimizing functions that are Lipschitz and strongly convex, but not necessarily differentiable. We prove that after $T$ steps of stochastic gradient descent (SGD), the error of the final iterate is $O(\log(T)/T)$ \emph{with high probability}. We also construct a function from this class for which the error of the final iterate of \emph{deterministic} gradient descent is $\Omega(\log(T)/T)$. This shows that the upper bound is tight and that, in this setting, the last iterate of stochastic gradient descent has the same worst-case error rate (with high probability) as deterministic gradient descent. This resolves both open questions posed by \citet{ShamirQuestion}.
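As a rough illustration of the setting (not of the lower-bound construction from the thesis), the sketch below runs projected stochastic subgradient descent on a simple non-smooth, strongly convex function and returns the final iterate. The objective, the noise model, and the projection set are illustrative assumptions; the step size $\eta_t = 1/(\lambda t)$ is the standard choice for $\lambda$-strongly convex objectives.

```python
import numpy as np

def sgd_final_iterate(grad_oracle, project, x0, lam, T, rng):
    """Projected stochastic subgradient descent; returns the FINAL iterate x_T.

    grad_oracle(x, rng) -- returns a noisy subgradient at x
    project(x)          -- Euclidean projection onto the feasible set
    lam                 -- strong convexity parameter
    """
    x = x0.copy()
    for t in range(1, T + 1):
        g = grad_oracle(x, rng)
        eta = 1.0 / (lam * t)          # standard step size for lam-strongly-convex f
        x = project(x - eta * g)
    return x

# Illustrative example (not from the thesis): f(x) = (lam/2)*||x||^2 + ||x||_1,
# which is lam-strongly convex and non-smooth; subgradients are observed with noise.
lam, d, T = 1.0, 10, 10_000
rng = np.random.default_rng(0)
grad = lambda x, r: lam * x + np.sign(x) + r.normal(scale=0.1, size=x.shape)
proj = lambda x: x / max(1.0, np.linalg.norm(x))   # project onto the unit ball
x_T = sgd_final_iterate(grad, proj, np.ones(d), lam, T, rng)
print(np.linalg.norm(x_T))   # distance of the final iterate from the minimizer x* = 0
```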

We prove analogous results for functions which are Lipschitz and convex, but not necessarily strongly convex or differentiable. After $T$ steps of stochastic gradient descent, the error of the final iterate is $O(\log(T)/\sqrt{T})$ \emph{with high probability}, and there exists a function for which the error of the final iterate of \emph{deterministic} gradient descent is $\Omega(\log(T)/\sqrt{T})$.
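For concreteness, the step-size schedules usually associated with these two regimes (an assumption of this sketch; the constants in the thesis may differ) and the corresponding final-iterate rates stated above are:

```latex
% Standard step-size schedules for the two regimes (illustrative choices)
\[
  \eta_t = \frac{1}{\lambda t} \ \ (\lambda\text{-strongly convex})
  \;\Longrightarrow\; f(x_T) - f^* = O\!\left(\tfrac{\log T}{T}\right),
  \qquad
  \eta_t = \frac{c}{\sqrt{t}} \ \ (\text{convex})
  \;\Longrightarrow\; f(x_T) - f^* = O\!\left(\tfrac{\log T}{\sqrt{T}}\right).
\]
```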

In the strongly convex setting, several forms of SGD, including suffix averaging, are known to achieve the optimal $O(1/T)$ convergence rate \emph{in expectation}. An intermediate step of our high-probability analysis of the final iterate shows that suffix averaging achieves error $O(1/T)$ \emph{with high probability}, which is optimal for any first-order optimization method. This improves upon the results of \citet{Rakhlin} and \citet{HK14}, both of which achieve error $O(1/T)$ only in expectation, together with a suboptimal high-probability bound of $O(\log \log(T)/T)$. This is the first known high-probability result that attains the optimal $O(1/T)$ rate.
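Suffix averaging returns the average of the last constant fraction of the iterates (for example, the last half). A minimal sketch, assuming the iterates have already been collected and that the suffix is the final half by default:

```python
import numpy as np

def suffix_average(iterates, alpha=0.5):
    """Average the last ceil(alpha * T) iterates (suffix averaging).

    iterates -- sequence of SGD iterates x_1, ..., x_T
    alpha    -- fraction of the trajectory to average (0.5 = last half)
    """
    T = len(iterates)
    k = max(1, int(np.ceil(alpha * T)))
    return np.mean(iterates[T - k:], axis=0)
```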

We also consider the simple non-uniform averaging strategy of \citet{lacoste2012simpler} and prove that it too achieves the optimal $O(1/T)$ convergence rate \emph{with high probability}, giving a second algorithm with this guarantee. Our high-probability results are proven using a generalization of Freedman's Inequality which we develop.
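As I read it, the averaging scheme of \citet{lacoste2012simpler} weights iterate $x_t$ proportionally to $t$, i.e. it returns $\frac{2}{T(T+1)} \sum_{t=1}^{T} t\, x_t$, and this weighted average can be maintained online. A minimal sketch of that weighting (an illustration, not the thesis's code):

```python
import numpy as np

def nonuniform_average(iterates):
    """Weighted average with weight t on iterate x_t, i.e.
    (2 / (T*(T+1))) * sum_t t * x_t, maintained as a running average."""
    avg = None
    for t, x in enumerate(iterates, start=1):
        if avg is None:
            avg = np.array(x, dtype=float)      # bar{x}_1 = x_1
        else:
            # bar{x}_t = (1 - 2/(t+1)) * bar{x}_{t-1} + (2/(t+1)) * x_t
            avg = (1 - 2.0 / (t + 1)) * avg + (2.0 / (t + 1)) * np.asarray(x, dtype=float)
    return avg
```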

You may view my M.Sc. thesis online here, or download a copy here.