Complexity of In-Context Concept Learning in Language Models
Abstract
This thesis studies the factors that contribute to the successes and shortcomings of in-context learning in Large Language Models (LLMs): the ability of some language models to perform a new task at inference time using only a few labeled examples. Drawing on insights from the literature on human concept learning, we test LLMs on carefully designed concept learning tasks and show that task performance correlates strongly with the logical complexity of the concept. This suggests that in-context learning exhibits a bias toward simplicity similar to that observed in humans.
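The setup the abstract describes can be sketched in code. The snippet below is a hypothetical illustration, not the thesis's actual materials: it formats labeled examples into a few-shot prompt (the "in-context" part) and uses a crude operator count as a stand-in for logical complexity. The feature names, the prompt format, and the complexity measure are all assumptions for illustration.

```python
def logical_complexity(formula: str) -> int:
    """Crude proxy for a concept's logical complexity: count Boolean operators.
    (Real measures, e.g. minimal-formula length, are more involved.)"""
    return sum(formula.split().count(op) for op in ("and", "or", "not"))

def build_prompt(examples, query):
    """Format labeled examples into a few-shot prompt for in-context learning."""
    lines = [f"Input: {x}  Label: {y}" for x, y in examples]
    lines.append(f"Input: {query}  Label:")
    return "\n".join(lines)

# A hypothetical concept "blue and round" with a few labeled examples:
examples = [("blue round", "yes"), ("red round", "no"), ("blue square", "no")]

print(logical_complexity("blue and round"))            # one operator
print(logical_complexity("blue and ( round or small )"))  # two operators
print(build_prompt(examples, "blue round"))
```

Under this kind of setup, one would vary the concept's complexity, query the model with the resulting prompts, and correlate accuracy with the complexity measure.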
Description
Thesis (Master's)--University of Washington, 2025
