Language Models can Generalize from Indirect Evidence: Evidence from Filtered Corpus Training (FICT)


Abstract

This thesis introduces Filtered Corpus Training, a method that trains language models (LMs) on corpora from which certain linguistic constructions have been filtered out, and uses it to measure the ability of LMs to perform linguistic generalization on the basis of indirect evidence. Applying the method to LSTM and Transformer LMs of roughly comparable size, we develop corpora filtered of direct evidence for a wide range of linguistic phenomena. Our results show that while Transformers are better qua LMs (as measured by perplexity), both models perform equally and surprisingly well on linguistic generalization measures, suggesting that they are capable of generalizing from indirect evidence. This adds to a growing body of evidence on the limitations of perplexity as an evaluation metric, while also showing that direct attestation may not be strictly necessary for learners to develop the appropriate linguistic generalizations.
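To make the filtering idea concrete, here is a minimal, hypothetical sketch of removing sentences that directly attest a target construction from a training corpus. The regex proxy for the construction and the function name are illustrative assumptions, not the thesis's actual filtering procedure, which targets a wide range of phenomena with more careful detection.

```python
import re

def filter_corpus(sentences, pattern):
    """Keep only sentences that do NOT match the target construction.

    `pattern` is a crude regex proxy for a linguistic construction;
    real filters would use syntactic annotation, not surface strings.
    """
    rx = re.compile(pattern)
    return [s for s in sentences if not rx.search(s)]

corpus = [
    "The cat that the dog chased ran away.",  # attests an object relative clause
    "The cat ran away.",
    "A dog chased the cat.",
]

# Surface-string proxy for object relative clauses: the sequence "that the"
filtered = filter_corpus(corpus, r"\bthat the\b")
print(filtered)  # only the two sentences without the construction remain
```

An LM trained on `filtered` never sees direct evidence for the construction; testing it on held-out minimal pairs involving that construction then measures generalization from indirect evidence.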

Description

Thesis (Master's)--University of Washington, 2024
