Approaches to Epistemic Risk in Generative and General-Purpose AI

dc.contributor.advisor: Howe, Bill
dc.contributor.advisor: Hiniker, Alexis
dc.contributor.author: Wolfe, Robert
dc.date.accessioned: 2025-08-01T22:25:36Z
dc.date.available: 2025-08-01T22:25:36Z
dc.date.issued: 2025-08-01
dc.date.submitted: 2025
dc.description: Thesis (Ph.D.)--University of Washington, 2025
dc.description.abstract: Generative and general-purpose AI systems stand poised to reshape longstanding information infrastructures and professions, ranging from search to social media to online journalism. Yet questions surrounding subtle biases, misinforming output, and system reliability and transparency (epistemic risks related to the way knowledge is encoded and disseminated) have followed these technologies since their inception. Without strategies for understanding and managing the risks they pose, general-purpose models may degrade the reliability of the information ecosystem, as well as introduce hazards for the individuals and institutions deploying them. This dissertation introduces methods to understand epistemic risks in generative and general-purpose AI and approaches to responsibly deploy these systems in the presence of inevitable epistemic risk. Concretely, this dissertation develops three approaches to epistemic risk in generative and general-purpose AI. First, I introduce computational approaches to identifying both the manifestations of epistemic risks like bias and misrepresentation and their underlying causes, such as the scale of a model's pretraining dataset and the unanticipated biases present in high-quality media data such as online newspaper articles. Second, I introduce novel design frameworks that account for epistemic risk in generative models, taking into account the need for information integrity among organizations engaged in data-driven knowledge work, as well as among users in interpersonal communication online. Finally, I introduce transparency-maximizing approaches to mitigate the heightened epistemic risk of using generative models served over black-box APIs, including an approach that customizes small open models on consumer-grade GPUs, as well as a context-sensitive approach to the adoption of open and proprietary models that accounts for the needs of organizations engaged in human-centered data science work. Taken together, these approaches point toward a future for generative and general-purpose AI that values reliability and information integrity.
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Wolfe_washington_0250E_28021.pdf
dc.identifier.uri: https://hdl.handle.net/1773/53668
dc.language.iso: en_US
dc.relation.haspart: Robert Wolfe - Dissertation - Code.zip; other; .
dc.rights: CC BY-NC-SA
dc.subject: AI and Society
dc.subject: AI Reliability
dc.subject: AI Transparency
dc.subject: Epistemic Risk
dc.subject: General-Purpose AI
dc.subject: Generative AI
dc.subject: Information science
dc.subject.other: Information science
dc.title: Approaches to Epistemic Risk in Generative and General-Purpose AI
dc.type: Thesis

Files

Original bundle

Name: Wolfe_washington_0250E_28021.pdf
Size: 14.81 MB
Format: Adobe Portable Document Format
Name: Robert Wolfe - Dissertation - Code.zip
Size: 533.44 MB
Format: Unknown data format