Title: Learnability of Autoregressive Transformers
Author: Hong, Jeongyeob
Advisor: Steinert-Threlkeld, Shane
Degree: Thesis (Master's)--University of Washington, 2025
Date issued: 2025
Date available: 2026-02-05
File: Hong_washington_0250O_29153.pdf
Handle: https://hdl.handle.net/1773/55251
Format: application/pdf
Language: en-US
License: CC BY
Subjects: Linguistics; Cognitive psychology; Computer science
Type: Thesis

Abstract: This paper explores the learning mechanism of decoder-only transformers through the lens of human concept learning. We investigate whether decoder-only transformers exhibit a simplicity bias, the human tendency to favor simpler representations. To do so, we build a pipeline that generates every task a decoder-only transformer can learn and express for a given set of input symbols, length, and depth. Our initial results do not provide sufficient evidence of a simplicity bias in autoregressive models. We close with a discussion of other factors that may explain the learnability of transformers, such as the computational cost of each operation.
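
The abstract does not specify how the task-generation pipeline works, so the following is only a minimal sketch of one way to enumerate every task expressible over a given set of input symbols up to a given depth, assuming the tasks are Boolean functions built from AND, OR, and NOT. The names SYMBOLS, MAX_DEPTH, enumerate_formulas, and distinct_tasks are hypothetical and not taken from the thesis.

```python
# Hypothetical sketch: enumerate Boolean formulas over fixed input symbols
# up to a maximum depth, then collapse them to distinct functions (tasks)
# keyed by truth table. The task space and complexity measure are assumptions.
from itertools import product

SYMBOLS = ["x1", "x2"]   # assumed input symbols
MAX_DEPTH = 2            # assumed maximum formula depth


def enumerate_formulas(depth):
    """Return (formula_string, function) pairs of all formulas up to `depth`."""
    if depth == 0:
        # Depth-0 formulas are just the input symbols themselves.
        return [(s, (lambda i: (lambda assignment: assignment[i]))(i))
                for i, s in enumerate(SYMBOLS)]
    smaller = enumerate_formulas(depth - 1)
    formulas = list(smaller)  # lower-depth formulas come first
    for fa, a in smaller:
        # Unary negation.
        formulas.append((f"not({fa})",
                         (lambda a: (lambda v: not a(v)))(a)))
        for fb, b in smaller:
            # Binary connectives.
            formulas.append((f"and({fa},{fb})",
                             (lambda a, b: (lambda v: a(v) and b(v)))(a, b)))
            formulas.append((f"or({fa},{fb})",
                             (lambda a, b: (lambda v: a(v) or b(v)))(a, b)))
    return formulas


def distinct_tasks(max_depth):
    """Map each distinct truth table to the first (lowest-depth) formula found."""
    assignments = list(product([False, True], repeat=len(SYMBOLS)))
    tasks = {}
    for formula, fn in enumerate_formulas(max_depth):
        table = tuple(fn(v) for v in assignments)
        tasks.setdefault(table, formula)
    return tasks


if __name__ == "__main__":
    # Each distinct truth table is one task; its formula gives a rough
    # proxy for the task's minimal complexity at this depth bound.
    for table, formula in distinct_tasks(MAX_DEPTH).items():
        print(table, "<-", formula)
```

Because lower-depth formulas are enumerated first, the formula kept for each truth table is a lowest-depth witness, which is one possible stand-in for the "simpler representation" that a simplicity bias would favor.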