Machine Learning in Adversarial Settings: Attacks and Defenses

dc.contributor.advisor: Poovendran, Radha
dc.contributor.author: Hosseini, Hossein
dc.date.accessioned: 2019-10-15T22:53:48Z
dc.date.available: 2019-10-15T22:53:48Z
dc.date.issued: 2019-10-15
dc.date.submitted: 2019
dc.description: Thesis (Ph.D.)--University of Washington, 2019
dc.description.abstract: Deep neural networks have achieved remarkable success over the last decade in a variety of tasks. Such models are, however, typically designed and developed with the implicit assumption that they will be deployed in benign settings. With the increasing use of learning systems in security-sensitive and safety-critical applications, such as banking, medical diagnosis, and autonomous cars, it is important to study and evaluate their performance in adversarial settings. The security of machine learning systems has been studied from different perspectives. Learning models are subject to attacks at both the training and test phases. The main threat at test time is the evasion attack, in which the attacker subtly modifies input data such that a human observer would perceive the original content, but the model generates a different output. Such inputs, known as adversarial examples, have been used to attack voice interfaces, face-recognition systems, and text classifiers. The goal of this dissertation is to investigate the test-time vulnerabilities of machine learning systems in adversarial settings and to develop robust defensive mechanisms. The dissertation covers two classes of models: (1) commercial ML products developed by Google, namely the Perspective, Cloud Vision, and Cloud Video Intelligence APIs, and (2) state-of-the-art image classification algorithms. In both cases, we propose novel test-time attack algorithms and present defense methods against such attacks.
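The evasion attack described in the abstract is commonly illustrated by gradient-based perturbation methods such as the fast gradient sign method (FGSM). The sketch below is a minimal, hypothetical example on a toy logistic-regression "classifier" with made-up fixed weights, not the dissertation's own attack algorithms: it nudges the input in the direction that increases the model's loss, so the prediction changes while the input changes only slightly.

```python
import numpy as np

# Hypothetical toy model: logistic regression with fixed (made-up) weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Probability of class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps=0.25):
    """FGSM-style evasion: step by eps in the sign of the loss gradient.

    For binary cross-entropy with a linear model, the gradient of the
    loss w.r.t. the input x is (p - y) * w.
    """
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.5, 1.0])  # clean input, true label 1
x_adv = fgsm(x, y=1.0)

# The perturbed input looks almost the same, but the model's confidence
# in the true class drops.
print(predict(x), predict(x_adv))
```

Real evasion attacks on deep networks work the same way in principle, but compute the input gradient by backpropagation and often constrain the perturbation norm so the change stays imperceptible to a human observer.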
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Hosseini_washington_0250E_20625.pdf
dc.identifier.uri: http://hdl.handle.net/1773/44673
dc.language.iso: en_US
dc.rights: none
dc.subject: Adversarial Learning
dc.subject: Machine Learning
dc.subject: Security
dc.subject: Computer science
dc.subject.other: Electrical engineering
dc.title: Machine Learning in Adversarial Settings: Attacks and Defenses
dc.type: Thesis

Files

Original bundle

Name: Hosseini_washington_0250E_20625.pdf
Size: 7.81 MB
Format: Adobe Portable Document Format