Title: Dimensionality Reduction for Supervised and Unsupervised Learning: New Algorithms, Analysis and Application
Author: Liu, Hexuan
Advisors: Aravkin, Aleksandr; Greenbaum, Anne
Date issued: 2022
Date available: 2022-07-14
File: Liu_washington_0250E_24219.pdf
URI: http://hdl.handle.net/1773/48810
Description: Thesis (Ph.D.)--University of Washington, 2022
Format: application/pdf
Language: en-US
Rights: none
Subjects: Dimensionality Reduction; Eigenvalue Problem; Machine Learning; Numerical Analysis; Optimization; Sparsity; Applied mathematics; Mathematics; Computer science
Type: Thesis

Abstract: Dimensionality reduction is an essential topic in data science, particularly when data are high-dimensional or have more features than samples. The process of reducing the data dimension usually involves solving an eigenvalue problem. For example, the ubiquitously used principal component analysis obtains the principal subspace by solving a standard eigenvalue problem, and linear discriminant analysis obtains a discriminative subspace by solving a generalized eigenvalue problem. A vast array of real-world data problems can be framed mathematically as variants of eigenvalue problems, including eigenvalue problems with sparsity constraints and penalties, and nonlinear eigenvalue problems. In this thesis, I present new formulations of penalized and constrained eigenvalue problems for dimensionality reduction, propose new provably convergent algorithms to solve these formulations, and present real-world applications spanning many fields, including examples from both supervised and unsupervised learning.
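The abstract's two motivating examples can be sketched concretely. This is a minimal illustrative sketch (not the thesis's algorithms): textbook PCA as a standard eigenvalue problem on the sample covariance, and textbook two-class LDA as a generalized eigenvalue problem on the between- and within-class scatter matrices, both solved with `scipy.linalg.eigh`. The toy data and all variable names are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy data: 100 samples, 5 features, two classes (class 1 shifted by +1).
X = rng.normal(size=(100, 5))
y = np.repeat([0, 1], 50)
X[y == 1] += 1.0

# --- PCA: standard eigenvalue problem  C v = lambda v ---
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (len(X) - 1)        # sample covariance matrix
evals, evecs = eigh(C)              # eigenvalues in ascending order
pcs = evecs[:, ::-1][:, :2]         # top-2 principal directions

# --- LDA: generalized eigenvalue problem  S_b v = lambda S_w v ---
mu = X.mean(axis=0)
Sw = np.zeros((5, 5))               # within-class scatter
Sb = np.zeros((5, 5))               # between-class scatter
for c in (0, 1):
    Xk = X[y == c]
    mk = Xk.mean(axis=0)
    Sw += (Xk - mk).T @ (Xk - mk)
    d = (mk - mu)[:, None]
    Sb += len(Xk) * (d @ d.T)
gvals, gvecs = eigh(Sb, Sw)         # generalized symmetric-definite problem
w = gvecs[:, -1]                    # most discriminative direction
w = w / np.linalg.norm(w)           # normalize for interpretability
```

Passing `Sw` as the second argument to `eigh` solves the symmetric-definite pencil directly, which is the same structure (though not the same algorithms) exploited in the thesis's generalized eigenvalue formulations.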