LSTAR Framework: Lightweight Framework for Standardizing Tests for Adversarial Robustness

dc.contributor.advisor: Lagesse, Brent
dc.contributor.author: Tran, Kenneth
dc.date.accessioned: 2024-09-09T22:59:21Z
dc.date.available: 2024-09-09T22:59:21Z
dc.date.issued: 2024-09-09
dc.date.submitted: 2024
dc.description: Thesis (Master's)--University of Washington, 2024
dc.description.abstract: The role of neural networks in various tasks has exploded in recent years, becoming prevalent in many safety-critical applications. However, improving neural network robustness has become a challenge due to the existence of adversarial examples: imperceptible perturbations to the inputs of machine learning models that mislead classifiers into producing incorrect outputs. While there have been numerous advancements in crafting adversarial attacks and defenses, research on the underlying basis of adversarial examples has notably lagged behind, largely due to the computational difficulty of analyzing high-dimensional spaces. This inherent difficulty has led researchers to construct models for understanding adversarial examples that diverge from conventional paradigms, with some relying on commonly used frameworks while others build their own tailored frameworks to meet their unique needs. Consequently, replicating and building upon research in this field presents a significant challenge. In this paper, we present a modular, lightweight framework to assist researchers in addressing these challenges by providing a comprehensive approach to evaluating machine learning models through a standardized experimentation platform. We present several potential hypotheses regarding the basis of adversarial examples and use our framework to verify them more robustly under complex attacks and datasets through controlled experiments. Our experimental results indicate that geometric causes directly affect the robustness of machine learning models, while statistical factors amplify the effects of adversarial attacks. This framework provides a baseline for further studies to better understand the phenomenon of adversarial examples, allowing researchers to design more robust machine learning models.
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Tran_washington_0250O_26699.pdf
dc.identifier.uri: https://hdl.handle.net/1773/51652
dc.language.iso: en_US
dc.rights: CC BY
dc.subject: Adversarial attacks
dc.subject: Adversarial machine learning
dc.subject: CuPy
dc.subject: Deep learning
dc.subject: Neural networks
dc.subject: Research standardization
dc.subject: Computer science
dc.subject.other: Computing and software systems
dc.title: LSTAR Framework: Lightweight Framework for Standardizing Tests for Adversarial Robustness
dc.type: Thesis

Files

Name: Tran_washington_0250O_26699.pdf
Size: 6.55 MB
Format: Adobe Portable Document Format