Complexity of In-Context Concept Learning in Language Models
| dc.contributor.advisor | Steinert-Threlkeld, Shane | |
| dc.contributor.author | Wang, Leroy | |
| dc.date.accessioned | 2025-08-01T22:26:08Z | |
| dc.date.available | 2025-08-01T22:26:08Z | |
| dc.date.issued | 2025-08-01 | |
| dc.date.submitted | 2025 | |
| dc.description | Thesis (Master's)--University of Washington, 2025 | |
| dc.description.abstract | This thesis studies the factors that contribute to the successes and shortcomings of in-context learning in Large Language Models (LLMs), that is, the ability of some language models to perform a new task during inference using only a few labeled examples. Drawing on insights from the literature on human concept learning, we test LLMs on carefully designed concept learning tasks and show that task performance correlates strongly with the logical complexity of the concept. This suggests that in-context learning exhibits a learning bias for simplicity similar to that observed in humans. | |
| dc.embargo.terms | Open Access | |
| dc.format.mimetype | application/pdf | |
| dc.identifier.other | Wang_washington_0250O_28567.pdf | |
| dc.identifier.uri | https://hdl.handle.net/1773/53676 | |
| dc.language.iso | en_US | |
| dc.rights | none | |
| dc.subject | Cognitive science | |
| dc.subject | Linguistics | |
| dc.subject | NLP | |
| dc.subject | Computer science | |
| dc.subject.other | Linguistics | |
| dc.title | Complexity of In-Context Concept Learning in Language Models | |
| dc.type | Thesis |
Files
Original bundle
1 - 1 of 1
- Name: Wang_washington_0250O_28567.pdf
- Size: 760.68 KB
- Format: Adobe Portable Document Format