Supervised Dimensionality Reduction for Different Learning Architectures
As datasets grow ever larger, there is a need for algorithms that can efficiently extract the information relevant to a given task and represent it concisely. Supervised dimensionality reduction is one approach to doing this: it reduces the input space of the data while retaining the characteristics of the data that are useful for classification. This dissertation motivates and analyzes a new supervised dimensionality reduction technique called local discriminative Gaussian (LDG) dimensionality reduction. Experiments show that LDG is fast and effective compared to other state-of-the-art supervised linear dimensionality reduction methods. LDG is also shown to be effective in the small sample size setting, where few training examples are available relative to the input dimensionality of the data. LDG is then extended to the transfer learning problem, where the goal is to classify test examples drawn from a target domain distribution using training examples drawn from a source domain distribution that differs from the target distribution.

Another contribution of this dissertation is an algorithm that reliably classifies incomplete test data. Incomplete test data classification is useful if one wishes to classify a test sample before all of the test data has been gathered, for example, to make an early decision on time-series data. Experiments show that the proposed algorithm can classify incomplete time-series data while maintaining accuracy comparable to that achieved using the complete test data. Furthermore, LDG dimensionality reduction is shown to greatly reduce the computational complexity of the incomplete test data classifier.
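The LDG algorithm itself is not reproduced in this abstract, but the general idea of supervised linear dimensionality reduction can be sketched with a classical stand-in: Fisher's linear discriminant, which likewise learns a projection that preserves class-discriminative structure. The snippet below is a minimal illustration under that assumption; the dimensions, sample counts, and synthetic data are hypothetical.

```python
import numpy as np

# Hypothetical two-class example: project 10-D inputs onto the Fisher
# discriminant direction, reducing each example to a single scalar
# while preserving class separation. This is a generic stand-in for
# supervised linear dimensionality reduction, not the LDG method.
rng = np.random.default_rng(0)
d = 10                                    # input dimensionality
X0 = rng.normal(loc=0.0, size=(50, d))    # class-0 training samples
X1 = rng.normal(loc=2.0, size=(50, d))    # class-1 training samples

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Within-class scatter matrix S_w
Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
# Fisher direction: w proportional to S_w^{-1} (mu1 - mu0)
w = np.linalg.solve(Sw, mu1 - mu0)

# Project: each 10-D input becomes a 1-D score
z0, z1 = X0 @ w, X1 @ w
print(z0.mean() < z1.mean())  # → True: projected class means stay separated
```

The separation survives the projection because the difference of the projected class means equals the quadratic form (mu1 - mu0)^T S_w^{-1} (mu1 - mu0), which is positive whenever S_w is positive definite.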
- Electrical engineering