Advisors: Bilmes, Jeffery; Shah, Chirag
Author: Verma, Sahil
Date: 2026-02-05 (deposited); 2025 (issued)
File: Verma_washington_0250E_29142.pdf
URI: https://hdl.handle.net/1773/55197
Description: Thesis (Ph.D.)--University of Washington, 2025
Abstract: Recent advancements in ML have enabled models to accomplish unprecedented tasks, ranging from simple binary classification for loan applications to intrinsically complex self-driving. As models have become better, faster, and more powerful, they have also become larger and more opaque. This is a consequence of the widespread use of neural networks, which can capture and express incredibly complex representations but are uninterpretable to humans. This phenomenon raises the question of trust: as humans who want to remain in control, how do we trust a model to make correct decisions? In this thesis, I aim to answer this question by making models more interpretable, examining their robustness, and ensuring they are safe for us as a society to rely on.
Format: application/pdf
Language: en-US
Rights: none
Keywords: Artificial Intelligence; Machine Learning; Trustworthy ML
Subjects: Artificial intelligence; Computer science; Computer engineering; Computer science and engineering
Title: Towards Interpretable and Robust ML Systems
Type: Thesis