Towards Better Generalization: Model, Data, and Explicit Knowledge

Authors

Bagherinezhad, Hessam

Abstract

In this dissertation, I explore three ways to make models more generalizable. 1) Through explicit knowledge extraction: explicit knowledge enables models to correct their predictions and, in some cases, to break a complex task into smaller pieces, each of which can be trained with less data. 2) Through reducing model complexity: over-parameterized Convolutional Neural Networks (CNNs) are known to overfit the given training set and are therefore less generalizable. I explore redesigned convolutional layers that outperform standard CNNs in few-shot training scenarios. 3) Through making labels more informative: I study the current data-labeling paradigm and show that the labels of even a simple image classification task are noisy. Noisy labels reduce generalizability because over-parameterized models overfit the noisy signal specific to the training set and therefore perform poorly on an unseen test set. For explicit knowledge extraction, I first explore estimating and modeling the Newtonian physics of a scene, and then extracting information about the sizes of objects without any supervision. For reducing model complexity, I redesign convolutional layers to reduce their complexity by sharing a dictionary of vectors among different convolutions. For label-noise reduction, I make training more accurate by refining the labels of a dataset with a dynamic label generator, called the Label Refinery.
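The dictionary-sharing idea can be illustrated in a few lines of PyTorch. The sketch below is not the dissertation's exact formulation; the class name SharedDictConv2d, the dict_size parameter, and the initialization scheme are assumptions made for the example. The key point it demonstrates is that every filter of the layer is built as a linear combination of a small shared dictionary of vectors, so the number of free parameters drops sharply when the dictionary is small.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedDictConv2d(nn.Module):
    """Convolution whose filters are linear combinations of a shared
    dictionary of vectors (illustrative sketch, not the dissertation's
    exact architecture)."""

    def __init__(self, in_channels, out_channels, kernel_size, dict_size=32):
        super().__init__()
        filter_dim = in_channels * kernel_size * kernel_size
        # Shared dictionary: dict_size vectors, each the size of one
        # flattened filter. Assumed small relative to out_channels.
        self.dictionary = nn.Parameter(torch.randn(dict_size, filter_dim) * 0.01)
        # Per-output-channel mixing coefficients over the dictionary.
        self.coeffs = nn.Parameter(torch.randn(out_channels, dict_size) * 0.01)
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size

    def forward(self, x):
        # Rebuild the full filter bank from the shared dictionary:
        # (out_channels, dict_size) @ (dict_size, filter_dim).
        weight = (self.coeffs @ self.dictionary).view(
            self.out_channels, self.in_channels,
            self.kernel_size, self.kernel_size)
        return F.conv2d(x, weight, padding=self.kernel_size // 2)

# Usage: a drop-in replacement for nn.Conv2d(64, 128, 3, padding=1).
layer = SharedDictConv2d(64, 128, kernel_size=3, dict_size=32)
y = layer(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 128, 32, 32])
```

With dict_size=32, this layer stores 32 x 576 dictionary entries plus 128 x 32 coefficients, versus 128 x 576 weights for the dense convolution it replaces.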
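The Label Refinery training loop can likewise be summarized in code. The following is a minimal sketch under the assumption of standard PyTorch classifiers and a data loader; refine_and_train is a hypothetical helper, and the real procedure's model choices, schedules, and augmentation details differ. It shows the core mechanism: a trained "refinery" network generates soft labels dynamically for each batch, the student trains against those refined labels instead of the original hard labels, and the trained student can serve as the refinery for the next round.

```python
import torch
import torch.nn.functional as F

def refine_and_train(refinery, student, loader, optimizer, epochs=1):
    """One refinement round (illustrative sketch): the refinery
    generates soft labels on the fly, and the student is trained to
    match them instead of the dataset's original hard labels."""
    refinery.eval()
    student.train()
    for _ in range(epochs):
        for images, _hard_labels in loader:  # original labels are ignored
            with torch.no_grad():
                # Dynamic soft labels for this exact batch, including
                # whatever augmentation the loader already applied.
                soft_labels = F.softmax(refinery(images), dim=1)
            log_probs = F.log_softmax(student(images), dim=1)
            # Cross-entropy against the refined soft-label distribution.
            loss = -(soft_labels * log_probs).sum(dim=1).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student

# Iterating the procedure: each trained student becomes the refinery
# for the next round (make_model and train_loader are placeholders).
#
# for _ in range(num_rounds):
#     student = make_model()
#     optimizer = torch.optim.SGD(student.parameters(), lr=0.01)
#     refinery = refine_and_train(refinery, student, train_loader, optimizer)
```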

Description

Thesis (Ph.D.)--University of Washington, 2020
