Scalable Methods and Expressive Models for Planning Under Uncertainty

dc.contributor.advisor: Mausam, Mausam
dc.contributor.author: Kolobov, Andrey
dc.date.accessioned: 2013-07-25T17:51:38Z
dc.date.available: 2013-07-25T17:51:38Z
dc.date.issued: 2013-07-25
dc.date.submitted: 2013
dc.description: Thesis (Ph.D.)--University of Washington, 2013
dc.description.abstract: The ability to plan in the presence of uncertainty about the effects of one's own actions and the events of the environment is a core skill of a truly intelligent agent. This type of sequential decision-making has been modeled by Markov Decision Processes (MDPs), a framework known since at least the 1950s. The importance of MDPs is not merely philosophical: they have been applied to several impactful real-world scenarios, from inventory management to military operations planning. Nonetheless, the adoption of MDPs in practice is greatly hampered by two aspects. First, modern algorithms for solving them are still not scalable enough to handle many realistically sized problems. Second, the MDP classes we know how to solve tend to be restrictive, often failing to model significant aspects of the planning task at hand. As a result, many probabilistic scenarios fall outside of MDPs' scope. The research presented in this dissertation addresses both of these challenges. Its first contribution is several highly scalable approximation algorithms for existing MDP classes that combine two major planning paradigms, dimensionality reduction and deterministic relaxation. These approaches automatically extract human-understandable causal structure from an MDP and use this structure to efficiently compute a good MDP policy. Besides enabling us to handle larger planning scenarios, they bring us closer to the ideal of AI: building agents that autonomously recognize features important for solving a problem. While these techniques are applicable only to goal-oriented scenarios, this dissertation also introduces approximation algorithms for reward-oriented settings. The second contribution of this work is new MDP classes that take into account previously ignored aspects of planning scenarios, e.g., the possibility of catastrophic failures. The thesis explores their mathematical properties and proposes algorithms for solving these problems.
dc.embargo.terms: No embargo
dc.format.mimetype: application/pdf
dc.identifier.other: Kolobov_washington_0250E_11754.pdf
dc.identifier.uri: http://hdl.handle.net/1773/23481
dc.language.iso: en_US
dc.rights: Copyright is held by the individual authors.
dc.subject: Abstraction; Basis Function; Dead end; Markov Decision Process; Planning Under Uncertainty; Stochastic Shortest-Path MDP
dc.subject.other: Computer science
dc.subject.other: Computer engineering
dc.subject.other: Operations research
dc.subject.other: Computer science and engineering
dc.title: Scalable Methods and Expressive Models for Planning Under Uncertainty
dc.type: Thesis
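
To make the abstract's central object concrete, here is a minimal sketch of a goal-oriented MDP solved by value iteration, the classical dynamic-programming method for this framework. The two-state problem, its action names, transition probabilities, and costs are hypothetical illustrations invented for this sketch; they are not taken from the thesis.

```python
def value_iteration(states, actions, P, cost, goal, eps=1e-6):
    """Compute the optimal expected cost-to-goal V(s) for each state.

    P maps (state, action) to a dict of successor-state probabilities;
    cost maps (state, action) to the immediate action cost.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                continue  # the goal is absorbing and cost-free
            # Bellman backup: pick the action minimizing immediate cost
            # plus the expected cost-to-goal of the successor states.
            best = min(
                cost[(s, a)] + sum(p * V[s2] for s2, p in P[(s, a)].items())
                for a in actions
                if (s, a) in P
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Hypothetical example: from state 's0', action 'move' reaches the
# goal 'g' with probability 0.9 and stays in 's0' with probability
# 0.1, at a cost of 1 per attempt.
states = ['s0', 'g']
actions = ['move']
P = {('s0', 'move'): {'g': 0.9, 's0': 0.1}}
cost = {('s0', 'move'): 1.0}
V = value_iteration(states, actions, P, cost, goal='g')
# V['s0'] converges to 1/0.9, the expected number of attempts to reach 'g'.
```

The dead ends and catastrophic failures mentioned in the abstract correspond to states from which no action path reaches the goal, which is exactly where the standard formulation above breaks down and the thesis's extended MDP classes come in.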

Files

Original bundle

Name: Kolobov_washington_0250E_11754.pdf
Size: 1.86 MB
Format: Adobe Portable Document Format