
dc.contributor.advisor: Steinert-Threlkeld, Shane
dc.contributor.author: Longwill, Benny Frank
dc.date.accessioned: 2021-03-19T22:56:09Z
dc.date.available: 2021-03-19T22:56:09Z
dc.date.submitted: 2020
dc.identifier.other: Longwill_washington_0250O_22450.pdf
dc.identifier.uri: http://hdl.handle.net/1773/46830
dc.description: Thesis (Master's)--University of Washington, 2020
dc.description.abstract: This thesis presents a study designed to test the effect of generative adversarial network (GAN) training on the quality of natural language generation (NLG) using a pre-trained language model architecture: Bidirectional Encoder Representations from Transformers (BERT). Perplexity and BLEU scores were used as evaluation metrics on 1000 samples of generated text. Results indicated that perplexity decreased and that BLEU scores comparing generated samples with the original data distribution increased; thus, there was evidence that the quality of NLG was improved by the introduction of GAN training. This alternative training method may also be effective for other, more state-of-the-art pre-trained architectures.
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.rights: CC BY-SA
dc.subject: BERT
dc.subject: GPT
dc.subject: language model
dc.subject: natural language generation
dc.subject: natural language processing
dc.subject: perplexity
dc.subject: Computer science
dc.subject: Linguistics
dc.subject: Artificial intelligence
dc.subject.other: Linguistics
dc.title: The Suitability of Generative Adversarial Training for BERT Natural Language Generation
dc.type: Thesis
dc.embargo.terms: Open Access
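
The abstract above describes scoring generated text with perplexity and BLEU. The record itself contains no code, so the sketch below is only an illustration of how such scores could be computed for a set of generated samples; the use of an off-the-shelf GPT-2 model for perplexity, NLTK's corpus BLEU, and the example strings are assumptions for demonstration, not the thesis's actual evaluation pipeline.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def perplexity(text, model, tokenizer):
    # Token-level perplexity: exp of the mean negative log-likelihood
    # assigned to the text by the scoring language model.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

def bleu_against_references(generated, references):
    # corpus_bleu expects, for each hypothesis, a list of tokenized
    # references; here every sample is compared against the same
    # reference set drawn from the original data.
    refs = [[r.split() for r in references]] * len(generated)
    hyps = [g.split() for g in generated]
    smooth = SmoothingFunction().method1
    return corpus_bleu(refs, hyps, smoothing_function=smooth)

if __name__ == "__main__":
    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2")
    lm.eval()
    # Hypothetical generated samples and reference sentences; in a study
    # like the one described, all 1000 generated samples would be scored.
    samples = ["the model generates a short sentence .",
               "another generated sample of text ."]
    training_refs = ["a short sentence from the training data .",
                     "text drawn from the original distribution ."]
    mean_ppl = sum(perplexity(s, lm, tok) for s in samples) / len(samples)
    bleu = bleu_against_references(samples, training_refs)
    print(f"mean perplexity: {mean_ppl:.2f}  corpus BLEU: {bleu:.3f}")

Lower mean perplexity and higher BLEU against the original data distribution are the directions of improvement the abstract reports for GAN-trained BERT generation.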

