Matrix-free methods for large-scale optimization
Sequential quadratic optimization (SQP) methods are widely used to solve large-scale nonlinear optimization problems. We develop two matrix-free methods for approximately solving the exact penalty subproblems that arise when SQP methods are applied to large-scale problems. The first is a novel iterative re-weighting algorithm; the second is based on an alternating direction augmented Lagrangian approach adapted to our setting. We prove that both algorithms are globally convergent under mild assumptions.

SQP methods can be plagued by poor behavior of their global convergence mechanisms; here we consider mechanisms that use an exact penalty function to compute step sizes. To confront this issue, we propose a dynamic penalty parameter updating strategy, employed within the subproblem solver, such that the resulting search direction predicts progress toward both feasibility and optimality. We prove that this strategy does not decrease the penalty parameter unnecessarily in the neighborhood of points satisfying certain common assumptions. We also describe a coordinate descent subproblem solver into which our updating strategy can be readily incorporated.

In the final part of the thesis, we consider a block coordinate descent (BCD) method applied to graphical model learning problems with special structure, in particular hub structure and latent variable selection. We address the issue of maintaining the positive definiteness of covariance matrices under general rank-2 updates. An active-set strategy is employed to accelerate BCD for the hub structure problem. For latent variable selection, we propose a method for maintaining a low-rank factorization of the covariance matrix while preserving the convexity of the BCD subproblems. We show that the proposed method converges to a stationary point of a non-convex formulation. Extensive numerical experiments are reported for both models.
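The abstract leaves the subproblem unspecified; for concreteness, a standard l1 exact penalty subproblem from this literature (generic notation, not necessarily the thesis's exact formulation) is

\min_{d \in \mathbb{R}^n} \; \rho \Big( g_k^\top d + \tfrac{1}{2} d^\top H_k d \Big) + \big\| c_k + J_k d \big\|_1,

where g_k and c_k are the objective gradient and constraint values at the iterate x_k, J_k is the constraint Jacobian, H_k is a Hessian approximation, and \rho > 0 is the penalty parameter. A matrix-free solver accesses H_k and J_k only through matrix-vector products. A dynamic updating strategy of the kind described above would decrease \rho only when the trial direction fails to predict sufficient reduction in the linearized infeasibility \| c_k + J_k d \|_1 relative to \| c_k \|_1.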
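To illustrate the positive definiteness issue for rank-2 updates, the following minimal Python sketch safeguards a symmetric rank-2 update of a covariance matrix by backtracking on the step length until a Cholesky factorization succeeds. The function names and the backtracking rule are illustrative assumptions; the thesis's actual mechanism may differ and would avoid refactorizing at every trial.

import numpy as np

def is_positive_definite(A):
    # Test positive definiteness via a Cholesky attempt.
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

def safeguarded_rank2_update(Sigma, u, v, alpha=1.0, shrink=0.5, min_alpha=1e-8):
    # Apply the symmetric rank-2 update Sigma + alpha * (u v^T + v u^T),
    # halving alpha until positive definiteness is retained.
    while alpha >= min_alpha:
        trial = Sigma + alpha * (np.outer(u, v) + np.outer(v, u))
        if is_positive_definite(trial):
            return trial, alpha
        alpha *= shrink
    return Sigma, 0.0  # reject: no admissible step length found

For example, safeguarded_rank2_update(np.eye(3), u, v) returns the updated matrix together with the accepted step length, or the original matrix with step length 0.0 if every trial failed.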
- Mathematics