Complexity of In-Context Concept Learning in Language Models

dc.contributor.advisor: Steinert-Threlkeld, Shane
dc.contributor.author: Wang, Leroy
dc.date.accessioned: 2025-08-01T22:26:08Z
dc.date.available: 2025-08-01T22:26:08Z
dc.date.issued: 2025-08-01
dc.date.submitted: 2025
dc.description: Thesis (Master's)--University of Washington, 2025
dc.description.abstract: This thesis studies the factors that contribute to the success and shortcomings of in-context learning in Large Language Models (LLMs), that is, the ability of some language models to perform a new task during inference using only a few labeled examples. Drawing on insights from the literature on human concept learning, we test LLMs on carefully designed concept learning tasks and show that task performance correlates strongly with the logical complexity of the concept. This suggests that in-context learning exhibits a learning bias for simplicity similar to that observed in humans.
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Wang_washington_0250O_28567.pdf
dc.identifier.uri: https://hdl.handle.net/1773/53676
dc.language.iso: en_US
dc.rights: none
dc.subject: Cognitive science
dc.subject: Linguistics
dc.subject: NLP
dc.subject: Computer science
dc.subject.other: Linguistics
dc.title: Complexity of In-Context Concept Learning in Language Models
dc.type: Thesis

Files

Original bundle

Name: Wang_washington_0250O_28567.pdf
Size: 760.68 KB
Format: Adobe Portable Document Format