Advisors: Hajishirzi, Hannaneh; Zettlemoyer, Luke
Author: Min, Sewon
Date issued: 2024 (made available 2024-09-09)
File: Min_washington_0250E_27058.pdf
URI: https://hdl.handle.net/1773/51864
Description: Thesis (Ph.D.)--University of Washington, 2024

Abstract: Large language models (LMs) such as ChatGPT have revolutionized natural language processing and artificial intelligence more broadly. In this thesis, I discuss my research on understanding and advancing these models, centered around how they use the very large text corpora they are trained on. First, I describe our efforts to understand how these models learn to perform new tasks after training, demonstrating that their so-called in-context learning capabilities are almost entirely determined by what they learn from the training data. Next, I introduce a new class of LMs, nonparametric LMs, that repurpose this training data as a data store from which they retrieve information for improved accuracy and updatability. I describe my work on establishing the foundations of such models, including one of the first broadly used neural retrieval models and an approach that simplifies a traditional two-stage pipeline into one. I also discuss how nonparametric models open up new avenues for responsible data use, e.g., by segregating permissive and copyrighted text and using them differently. Finally, I envision the next generation of LMs we should build, focusing on efficient scaling, improved factuality, and decentralization.

Format: application/pdf
Language: en-US
Rights: CC BY-SA
Subjects: Computer science; Computer science and engineering
Title: Rethinking Data Use in Large Language Models
Type: Thesis