Towards Interpretable and Robust ML Systems
| dc.contributor.advisor | Bilmes, Jeffery JB | |
| dc.contributor.advisor | Shah, Chirag CS | |
| dc.contributor.author | Verma, Sahil | |
| dc.date.accessioned | 2026-02-05T19:34:23Z | |
| dc.date.available | 2026-02-05T19:34:23Z | |
| dc.date.issued | 2026-02-05 | |
| dc.date.submitted | 2025 | |
| dc.description | Thesis (Ph.D.)--University of Washington, 2025 | |
| dc.description.abstract | Recent advances in ML have enabled models to accomplish unprecedented tasks, from simple binary classification for loan applications to intrinsically complex self-driving. As models have become better, faster, and more powerful, they have also become larger and more opaque. This stems from the widespread use of neural networks, which capture and express incredibly complex representations but are uninterpretable to humans. This phenomenon raises the question of trust: as humans who want to remain in control, how do we trust a model to make correct decisions? In this thesis, I aim to answer this question by making models more interpretable, examining their robustness, and ensuring they are safe for society to rely on. | |
| dc.embargo.terms | Open Access | |
| dc.format.mimetype | application/pdf | |
| dc.identifier.other | Verma_washington_0250E_29142.pdf | |
| dc.identifier.uri | https://hdl.handle.net/1773/55197 | |
| dc.language.iso | en_US | |
| dc.rights | none | |
| dc.subject | Artificial Intelligence | |
| dc.subject | Machine Learning | |
| dc.subject | Trustworthy ML | |
| dc.subject | Artificial intelligence | |
| dc.subject | Computer science | |
| dc.subject | Computer engineering | |
| dc.subject.other | Computer science and engineering | |
| dc.title | Towards Interpretable and Robust ML Systems | |
| dc.type | Thesis |
Files
Original bundle
- Name: Verma_washington_0250E_29142.pdf
- Size: 30.96 MB
- Format: Adobe Portable Document Format