Positive AI with Social Commonsense Models
| dc.contributor.advisor | Choi, Yejin | |
| dc.contributor.advisor | Smith, Noah A | |
| dc.contributor.author | Sap, Maarten | |
| dc.date.accessioned | 2021-10-29T16:20:08Z | |
| dc.date.available | 2021-10-29T16:20:08Z | |
| dc.date.issued | 2021-10-29 | |
| dc.date.submitted | 2021 | |
| dc.description | Thesis (Ph.D.)--University of Washington, 2021 | |
| dc.description.abstract | To effectively understand language and safely communicate with humans, machines must not only grasp the surface meanings of texts, but also their underlying social meaning. This requires understanding interpersonal social commonsense, such as knowing to thank someone for giving you a present, as well as accounting for harmful social biases and stereotypes. While understanding these implied social dynamics is easy for most humans, it remains an elusive goal for AI and NLP systems. Importantly, systems that fail to account for these social and power dynamics risk producing redundant, rude, or even harmful outputs. In this dissertation, we take several steps towards making NLP systems more human-centric, socially aware, and equity-driven, motivated by the increased prowess and prevalence of AI and NLP technology. In the first part, we investigate methods for enabling NLP systems to reason about and revise the commonsense implications of text. We introduce ATOMIC, the first large-scale social commonsense knowledge graph for machines to reason about the causes and effects of everyday situations, and POWERTRANSFORMER, a system to revise the social implications of text using connotation frames of power and agency. In the second part, we tackle the problem of detecting and representing social biases and toxicity in language with socially aware NLP models. We examine shortcomings of existing toxic language detection tools, uncovering strong racial biases which cause text written by African American authors to be flagged as toxic more often than text written by white authors. Then, we introduce SOCIAL BIAS FRAMES, a new structured linguistic representation for distilling the harmful or biased implications of text as free-text explanations. We conclude by discussing the contributions of this dissertation as well as future directions towards improving the social awareness and equity of NLP systems. | |
| dc.embargo.terms | Open Access | |
| dc.format.mimetype | application/pdf | |
| dc.identifier.other | Sap_washington_0250E_23365.pdf | |
| dc.identifier.uri | http://hdl.handle.net/1773/47999 | |
| dc.language.iso | en_US | |
| dc.rights | CC BY-NC-SA | |
| dc.subject | Commonsense reasoning | |
| dc.subject | language connotations | |
| dc.subject | social biases | |
| dc.subject | toxic language | |
| dc.subject | Artificial intelligence | |
| dc.subject | Computer science | |
| dc.subject | Linguistics | |
| dc.subject.other | Computer science and engineering | |
| dc.title | Positive AI with Social Commonsense Models | |
| dc.type | Thesis |
Files
Original bundle
- Name: Sap_washington_0250E_23365.pdf
- Size: 5.21 MB
- Format: Adobe Portable Document Format
