Title: Local and Global Convergence for Convex-Composite Optimization
Author: Engle, Abraham
Advisor: Burke, James V.
Degree: Thesis (Ph.D.)--University of Washington, 2018
Date issued: 2018
Date deposited: 2019-02-22
File: Engle_washington_0250E_19485.pdf (application/pdf)
URI: http://hdl.handle.net/1773/43411
Language: en-US
Rights: CC BY
Subjects: Mathematical programming; Operations research; Optimization; Mathematics
Type: Thesis

Abstract:
Convex-composite optimization seeks to minimize f(x) := h(c(x)) over x in R^n, where h is closed, proper, and convex, and c is smooth. Such problems include nonlinear programming, mini-max optimization, and the estimation of nonlinear dynamics with non-Gaussian noise, as well as many modern approaches to large-scale data analysis and machine learning. Almost all methods for solving this problem involve direction-finding subproblems based on linearizing the smooth function c at the current iterate. When h is the identity function on the real line, these subproblems correspond to steepest descent, prox-gradient descent, Newton's method, or quasi-Newton methods. When h is an infinite-valued, piecewise-linear convex function, the subproblems are quadratic programs, one class of which corresponds to sequential quadratic programming for nonlinear programming. This thesis is divided into two parts. The first part is devoted to globalization strategies, including line-search and trust-region methods. The second part is devoted to local analysis in the case where h is piecewise linear-quadratic and convex, where the subproblems correspond to a Newton-like algorithm for an associated generalized equation describing the optimality conditions.
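The direction-finding scheme described in the abstract can be sketched for one concrete, smooth choice of h. This is an illustrative example, not code from the thesis: taking h(z) = (1/2)||z||^2 and adding a proximal term (1/(2t))||d||^2, the linearized subproblem min_d h(c(x) + c'(x)d) + (1/(2t))||d||^2 has a closed-form solution, yielding a damped Gauss-Newton (Levenberg-Marquardt-style) step. The test problem (Rosenbrock in composite form) and the parameter t are choices made for this sketch.

```python
# Illustrative sketch (not from the thesis): linearize the smooth map c at the
# current iterate x and solve the regularized subproblem
#   min_d  (1/2)||c(x) + J(x) d||^2 + (1/(2t))||d||^2,
# whose solution satisfies (J^T J + I/t) d = -J^T c(x).
import numpy as np

def prox_linear_step(c, J, x, t):
    """One direction-finding step for f(x) = (1/2)||c(x)||^2."""
    r, A = c(x), J(x)
    d = np.linalg.solve(A.T @ A + np.eye(x.size) / t, -A.T @ r)
    return x + d

# Toy instance: Rosenbrock in composite form, c(x) = (10(x2 - x1^2), 1 - x1),
# so that (1/2)||c(x)||^2 is the classical Rosenbrock function.
c = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
J = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])

x = np.zeros(2)
for _ in range(200):
    x = prox_linear_step(c, J, x, t=100.0)
# x converges to the minimizer (1, 1)
```

Larger t damps the step less, recovering a pure Gauss-Newton direction in the limit; smaller t gives a more conservative, gradient-like step, which is the trade-off the globalization strategies in the first part of the thesis are designed to manage.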