Local and Global Convergence for Convex-Composite Optimization

Authors

Engle, Abraham

Abstract

Convex-composite optimization seeks to minimize f(x) := h(c(x)) over x in R^n, where h is closed, proper, and convex, and c is smooth. Such problems include nonlinear programming, mini-max optimization, and the estimation of nonlinear dynamics with non-Gaussian noise, as well as many modern approaches to large-scale data analysis and machine learning. Almost all methods for solving this problem involve direction-finding subproblems based on linearizing the smooth function c at the current iterate. When h is the identity function on the real line, these direction-finding subproblems correspond to steepest descent, prox-gradient descent, Newton's method, or quasi-Newton methods. When h is an infinite-valued piecewise linear convex function, the subproblems are quadratic programs, one class of which corresponds to the sequential quadratic programming approach to nonlinear programming. This thesis is divided into two parts. The first part is devoted to globalization strategies, including line search and trust-region methods. The second part is devoted to local analysis in the case where h is piecewise linear-quadratic convex, where the subproblems correspond to a Newton-like algorithm for an associated generalized equation describing the optimality conditions.
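
To illustrate the idea described in the abstract, the following is a minimal sketch (not taken from the thesis) of the special case h(y) = ||y||^2, i.e. nonlinear least squares f(x) = ||c(x)||^2. The direction-finding subproblem linearizes c at the current iterate while keeping h intact, d_k = argmin_d h(c(x_k) + J(x_k) d), which in this case is the Gauss-Newton step, and an Armijo backtracking line search serves as a simple globalization strategy. The map c, its Jacobian, and the starting point below are illustrative assumptions, not examples from the thesis.

import numpy as np

def c(x):
    # toy smooth map c: R^2 -> R^2 (assumed example, not from the thesis)
    return np.array([x[0] ** 2 + x[1] - 1.0, x[0] - np.sin(x[1])])

def jac(x):
    # Jacobian of c at x
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, -np.cos(x[1])]])

def f(x):
    # composite objective f(x) = h(c(x)) with h(y) = ||y||^2
    r = c(x)
    return r @ r

def gauss_newton(x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        r, J = c(x), jac(x)
        # direction-finding subproblem: min_d ||r + J d||^2 (linear least squares)
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)
        if np.linalg.norm(d) < tol:
            break
        # Armijo backtracking line search as a globalization strategy
        t, fx, slope = 1.0, f(x), 2.0 * r @ (J @ d)
        while f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = x + t * d
    return x

if __name__ == "__main__":
    print(gauss_newton(np.array([2.0, 2.0])))

For a nonsmooth h, such as the piecewise linear or piecewise linear-quadratic cases discussed in the abstract, the same linearization produces a convex subproblem (e.g. a linear or quadratic program) rather than a linear least-squares problem, but the overall structure of the iteration is analogous.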

Description

Thesis (Ph.D.)--University of Washington, 2018
