Adversarial Example Resistant Hyperparameters and Deep Learning Networks

dc.contributor.advisor: Lagesse, Brent
dc.contributor.author: Hulderson, Eric Joseph
dc.date.accessioned: 2022-01-26T23:19:49Z
dc.date.available: 2022-01-26T23:19:49Z
dc.date.issued: 2022-01-26
dc.date.submitted: 2021
dc.description: Thesis (Master's)--University of Washington, 2021
dc.description.abstract: Carefully crafted inputs have been shown to cause misclassifications in machine-learning-based classification systems, a phenomenon known as adversarial examples. Hyperparameters, the settings used to build and train machine learning models, have been shown to produce models that are more resistant to adversarial examples. In this paper, we expand the research on hyperparameter saliency and incorporate deep learning architectures to complement this field of research, in addition to exploring the relationships between adversarial resistance, accuracy, and network depth. We find that hidden layer structure and activation function are important to resistance against adversarial perturbations; that network depth provides more robustness against some attacks while architecture influences robustness against others; and that the impact of salient hyperparameters on accuracy is complex.
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Hulderson_washington_0250O_23806.pdf
dc.identifier.uri: http://hdl.handle.net/1773/48151
dc.language.iso: en_US
dc.rights: none
dc.subject: adversarial examples
dc.subject: hyperparameters
dc.subject: machine learning
dc.subject: neural networks
dc.subject: Computer science
dc.subject.other: Computing and software systems
dc.title: Adversarial Example Resistant Hyperparameters and Deep Learning Networks
dc.type: Thesis

Files

Hulderson_washington_0250O_23806.pdf (1.69 MB, Adobe Portable Document Format)