Matrix free methods for large scale optimization

dc.contributor.advisor: Burke, James V
dc.contributor.author: Wang, Jiashan
dc.date.accessioned: 2015-09-29T21:24:48Z
dc.date.available: 2015-09-29T21:24:48Z
dc.date.issued: 2015-09-29
dc.date.submitted: 2015
dc.description: Thesis (Ph.D.)--University of Washington, 2015
dc.description.abstract: Sequential quadratic optimization (SQP) methods are widely used to solve large-scale nonlinear optimization problems. We build two matrix-free methods for approximately solving the exact penalty subproblems that arise when SQP methods are applied to large-scale problems. The first is a novel iterative re-weighting algorithm; the second applies alternating direction augmented Lagrangian techniques to our setting. We prove that both algorithms are globally convergent under loose assumptions. SQP methods can be plagued by poor behavior of their global convergence mechanisms; here we consider global convergence results that use an exact penalty function to compute step sizes. To confront this issue, we propose a dynamic penalty parameter updating strategy, employed within the subproblem solver, so that the resulting search direction predicts progress toward both feasibility and optimality. We prove that this strategy does not decrease the penalty parameter unnecessarily in the neighborhood of points satisfying certain common assumptions. We also discuss a coordinate descent subproblem solver into which our updating strategy can be readily incorporated. In the final application of the thesis, we consider a block coordinate descent (BCD) method applied to learning graphical models with special structure, in particular hub structure and latent variable selection. We address the issue of maintaining positive definiteness of covariance matrices under general rank-2 updates. An active set strategy is employed to speed up BCD for the hub structure problem. For latent variable selection problems, we propose a method for maintaining a low-rank factorization of the covariance matrix while preserving the convexity of the BCD subproblems. We show that our proposed method converges to a stationary point of a non-convex formulation. Extensive numerical experiments are reported for both models.
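As a rough illustration of the iterative re-weighting idea mentioned in the abstract (a generic sketch, not the thesis's algorithm), consider the one-dimensional ℓ1-penalized problem minimize ½(x − a)² + λ|x|, where |x| is replaced at each iteration by the smooth surrogate x² / (2(|x_k| + ε)), whose minimizer has a closed form:

```python
# Minimal IRLS sketch (illustrative only, not the algorithm from the thesis):
# minimize 0.5*(x - a)**2 + lam*abs(x) by repeatedly replacing abs(x)
# with the smooth surrogate x**2 / (2*(abs(x_k) + eps)), then solving
# the resulting quadratic exactly.

def irls_l1(a, lam, x0=None, eps=1e-8, iters=100):
    x = a if x0 is None else x0
    for _ in range(iters):
        w = abs(x) + eps              # current re-weighting term
        # argmin of 0.5*(x - a)**2 + lam * x**2 / (2*w)
        x = a * w / (w + lam)
    return x

# The exact minimizer is the soft-threshold of a at level lam,
# so for a = 3, lam = 1 the iterates converge to x = 2.
print(irls_l1(3.0, 1.0))  # ≈ 2.0
```

The iteration is a fixed-point scheme: each re-weighted quadratic is solvable in closed form, and the weights |x_k| + ε track the current iterate so the surrogate tightens around the ℓ1 penalty.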
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Wang_washington_0250E_15022.pdf
dc.identifier.uri: http://hdl.handle.net/1773/34028
dc.language.iso: en_US
dc.rights: Copyright is held by the individual authors.
dc.subject.other: Mathematics
dc.title: Matrix free methods for large scale optimization
dc.type: Thesis

Files

Original bundle

Name: Wang_washington_0250E_15022.pdf
Size: 881.28 KB
Format: Adobe Portable Document Format