Title: Complexity of In-Context Concept Learning in Language Models
Author: Wang, Leroy
Advisor: Steinert-Threlkeld, Shane
Date: 2025-08-01
Type: Thesis (Master's)--University of Washington, 2025
File: Wang_washington_0250O_28567.pdf
URI: https://hdl.handle.net/1773/53676
Format: application/pdf
Language: en-US
Subjects: Cognitive science; Linguistics; NLP; Computer science

Abstract: This thesis studies the factors that contribute to the success and shortcomings of in-context learning in Large Language Models (LLMs): the ability of some language models to perform a new task at inference time using only a few labeled examples. Drawing on insights from the literature on human concept learning, we test LLMs on carefully designed concept learning tasks and show that task performance correlates strongly with the logical complexity of the concept. This suggests that in-context learning exhibits a bias toward simpler concepts, much like human learners.