Towards Interpretable and Robust ML Systems

dc.contributor.advisor: Bilmes, Jeffery JB
dc.contributor.advisor: Shah, Chirag CS
dc.contributor.author: Verma, Sahil
dc.date.accessioned: 2026-02-05T19:34:23Z
dc.date.available: 2026-02-05T19:34:23Z
dc.date.issued: 2026-02-05
dc.date.submitted: 2025
dc.description: Thesis (Ph.D.)--University of Washington, 2025
dc.description.abstract: Recent advances in ML have enabled models to accomplish unprecedented tasks, ranging from simple binary classification for loan applications to intrinsically complex self-driving. As models have become better, faster, and more powerful, they have also become larger and more opaque. This is a consequence of the widespread use of neural networks, which capture and express incredibly complex representations but are uninterpretable to humans. This phenomenon raises the question of trust -- as humans who wish to remain in control, how do we trust a model to make correct decisions? In this thesis, I aim to answer this question by making models more interpretable, examining their robustness, and ensuring they are safe for us as a society to rely on.
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Verma_washington_0250E_29142.pdf
dc.identifier.uri: https://hdl.handle.net/1773/55197
dc.language.iso: en_US
dc.rights: none
dc.subject: Artificial Intelligence
dc.subject: Machine Learning
dc.subject: Trustworthy ML
dc.subject: Artificial intelligence
dc.subject: Computer science
dc.subject: Computer engineering
dc.subject.other: Computer science and engineering
dc.title: Towards Interpretable and Robust ML Systems
dc.type: Thesis

Files

Name: Verma_washington_0250E_29142.pdf
Size: 30.96 MB
Format: Adobe Portable Document Format