Dimensionality Reduction for Supervised and Unsupervised Learning: New Algorithms, Analysis and Application

Authors

Liu, Hexuan

Abstract

Dimensionality reduction is an essential topic in data science, particularly when data are high-dimensional or have more features than samples. The process of reducing the data dimension usually involves solving an eigenvalue problem. For example, the ubiquitously used principal component analysis obtains the principal subspace by solving a standard eigenvalue problem, and linear discriminant analysis obtains a discriminative subspace by solving a generalized eigenvalue problem. A vast array of real-world data problems can be framed mathematically as variants of eigenvalue problems, including eigenvalue problems with sparsity constraints and penalties, and nonlinear eigenvalue problems. In this thesis, I present new formulations of penalized and constrained eigenvalue problems for dimensionality reduction, propose new provably convergent algorithms to solve these formulations, and present real-world applications spanning many fields, including examples from both supervised and unsupervised learning.
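The two examples named in the abstract can be sketched concretely. Below is a minimal illustration (not the thesis's own algorithms) of how PCA reduces to a standard eigenvalue problem on the sample covariance, and how two-class LDA reduces to a generalized eigenvalue problem between the between-class and within-class scatter matrices; NumPy and SciPy are assumed, and the data are synthetic.

```python
import numpy as np
from scipy.linalg import eigh  # handles the generalized symmetric problem

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))        # 100 samples, 5 features (synthetic)
y = rng.integers(0, 2, size=100)     # binary labels for the LDA part

# PCA: principal subspace from the standard eigenvalue problem C v = lambda v
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (len(X) - 1)         # sample covariance matrix
evals, evecs = np.linalg.eigh(C)     # eigenvalues in ascending order
principal_subspace = evecs[:, ::-1][:, :2]  # top-2 principal directions

# LDA: discriminative direction from the generalized problem S_b w = lambda S_w w
mu = X.mean(axis=0)
Sw = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in (0, 1))
Sb = sum(np.sum(y == c) *
         np.outer(X[y == c].mean(axis=0) - mu, X[y == c].mean(axis=0) - mu)
         for c in (0, 1))
gvals, gvecs = eigh(Sb, Sw)          # generalized eigenvalue problem
w = gvecs[:, -1]                     # direction maximizing class separation
```

The sparsity-constrained and nonlinear variants studied in the thesis generalize exactly these two problem templates.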

Description

Thesis (Ph.D.)--University of Washington, 2022
