Title: Constrained, Causal, and Knowledge-Grounded Reasoning for Neural Language Generation
Contributors: Choi, Yejin; Qin, Lianhui
Date: 2023-08-14
File: Qin_washington_0250E_25916.pdf
URI: http://hdl.handle.net/1773/50296
Description: Thesis (Ph.D.)--University of Washington, 2023
Type: Thesis
Format: application/pdf
Language: en-US
Rights: CC BY
Keywords: counterfactual reasoning; energy-based modeling; knowledge grounding; machine reasoning; natural language processing; text generation
Subjects: Artificial intelligence; Computer science and engineering

Abstract:
This thesis aims to establish a connection between reasoning and language generation. Today's language models (LMs, such as GPT-3), despite producing human-like fluent text, essentially act like "a mouth without a brain": they generate without grounding in world knowledge, and lack the ability to flexibly reason about everyday situations and events, including counterfactual ("what if?") and abductive ("what might explain the observations?") reasoning. This thesis bridges the gap from three angles. (1) Differentiable reasoning with constraints: humans can incorporate arbitrary constraints from the context on the fly and reason in new situations without task-specific training. I develop a unified inference framework that endows LMs with this flexibility and efficiency, through a differentiable process that reasons over the vast space of discrete language while satisfying arbitrary neural and symbolic constraints. (2) Counterfactual and nonmonotonic reasoning in natural language: I establish the first formulation of counterfactual reasoning in language, and use this inference framework to equip common (monotonic) LMs with nonmonotonic reasoning capabilities, ranging from counterfactual and abductive to temporal reasoning in complex contexts. (3) Integration of knowledge and logic in neural language models: I develop mechanisms for integrating rich external knowledge and structures with neural LMs, to ground and boost their reasoning abilities.