Positive AI with Social Commonsense Models

dc.contributor.advisorChoi, Yejin
dc.contributor.advisorSmith, Noah A
dc.contributor.authorSap, Maarten
dc.date.accessioned2021-10-29T16:20:08Z
dc.date.available2021-10-29T16:20:08Z
dc.date.issued2021-10-29
dc.date.submitted2021
dc.descriptionThesis (Ph.D.)--University of Washington, 2021
dc.description.abstractTo effectively understand language and safely communicate with humans, machines must not only grasp the surface meanings of texts, but also their underlying social meaning. This requires understanding interpersonal social commonsense, such as knowing to thank someone for giving you a present, as well as accounting for harmful social biases and stereotypes. While understanding these implied social dynamics is easy for most humans, it remains an elusive goal for AI and NLP systems. Importantly, systems that fail to account for these social and power dynamics risk producing redundant, rude, or even harmful outputs. In this dissertation, we take several steps towards making NLP systems more human-centric, socially aware, and equity-driven, motivated by the increased prowess and prevalence of AI and NLP technology. In the first part, we investigate methods for enabling NLP systems to reason about and revise the commonsense implications of text. We introduce ATOMIC, the first large-scale social commonsense knowledge graph for machines to reason about the causes and effects of everyday situations, and POWERTRANSFORMER, a system to revise the social implications of text using connotation frames of power and agency. In the second part, we tackle the problem of detecting and representing social biases and toxicity in language with socially aware NLP models. We examine shortcomings of existing toxic language detection tools, uncovering strong racial biases which cause text written by African American authors to be flagged as toxic more often than text written by white authors. Then, we introduce SOCIAL BIAS FRAMES, a new structured linguistic representation for distilling the harmful or biased implications of text in free-text explanations. We conclude by discussing the contributions of this dissertation as well as future directions towards improving the social awareness and equity of NLP systems.
dc.embargo.termsOpen Access
dc.format.mimetypeapplication/pdf
dc.identifier.otherSap_washington_0250E_23365.pdf
dc.identifier.urihttp://hdl.handle.net/1773/47999
dc.language.isoen_US
dc.rightsCC BY-NC-SA
dc.subjectCommonsense reasoning
dc.subjectlanguage connotations
dc.subjectsocial biases
dc.subjecttoxic language
dc.subjectArtificial intelligence
dc.subjectComputer science
dc.subjectLinguistics
dc.subject.otherComputer science and engineering
dc.titlePositive AI with Social Commonsense Models
dc.typeThesis

Files

Original bundle

Name: Sap_washington_0250E_23365.pdf
Size: 5.21 MB
Format: Adobe Portable Document Format