dc.contributor.advisor  Fox, Dieter
dc.contributor.author  Schenck, Connor
dc.date.accessioned  2019-02-22T17:04:28Z
dc.date.available  2019-02-22T17:04:28Z
dc.date.submitted  2018
dc.identifier.other  Schenck_washington_0250E_19357.pdf
dc.identifier.uri  http://hdl.handle.net/1773/43352
dc.description  Thesis (Ph.D.)--University of Washington, 2018
dc.description.abstract  Liquids are an important part of everyday human environments. We use them for common tasks such as pouring coffee, mixing ingredients for a recipe, or washing hands. For a robot to operate effectively on such tasks, it must be able to robustly handle liquids. In this thesis, we investigate ways in which robots can overcome some of the challenges inherent in interacting with liquids: how robots can perceive, reason about, and manipulate them. We split this research into two parts. The first part focuses on how learning-based methods can be used to solve tasks involving liquids. The second part focuses on how model-based methods may be used, and how learning- and model-based methods may be combined.

In the first part of this thesis, we investigate how deep learning can be adapted to tasks involving liquids. We develop several deep network architectures for the task of detection, a liquid perception task in which the robot must label pixels in its color camera as liquid or not-liquid. Our results show that networks able to integrate temporal information outperform those that do not, indicating that temporal integration may be necessary for the perception of translucent liquids. Additionally, we apply our network architectures to the related task of tracking, a liquid reasoning task in which the robot must identify the pixel locations of all liquid, seen and unseen, in an image based on its learned knowledge of liquid physics. Our results show that the best performing network was one with an explicit memory, suggesting that liquid reasoning tasks may be easier to solve when explicit state information is passed forward in time. Finally, we apply our deep learning architectures to the task of pouring specific amounts of liquid, a manipulation task requiring precise control. The results show that, using our deep neural networks, the robot was able to pour specific amounts of liquid with only RGB feedback.

In the second part of this thesis, we investigate model-based methods for robotic interaction with liquids. Specifically, we focus on physics-based models that incorporate fluid dynamics algorithms. We show how a robot can use a liquid simulator to track the 3D state of liquid over time. By using a strong model, the robot is able to reason in two entirely different contexts with the exact same algorithm: in one case, about the amount of water in a container during a pour action; in the other, about a blockage in an opaque pipe. We extend our strong, physics-based liquid model by creating SPNets, an implementation of fluid dynamics built with deep learning tools, allowing it to be seamlessly integrated with deep networks and enabling fully differentiable fluid dynamics. Our results show that the gradients produced by this model can be used to discover fluid parameters (e.g., viscosity, cohesion) from data, precisely control liquids to move them to desired poses, and train policies directly from the model. We also show how this model can be integrated with deep networks to perceive and track the 3D liquid state.

To summarize, this thesis investigates both learning-based and model-based approaches to robotic interaction with liquids. Our results with deep learning, a learning-based approach, show that deep neural networks are proficient at learning to perceive liquids from raw sensory data and at learning basic physical properties of liquids. Our results with liquid simulation, a model-based approach, show that physics-based models generalize well to a wide variety of tasks. Finally, our results combining the two show how the generalizability of models may be combined with the adaptability of deep learning to enable the application of several robotics methodologies.
dc.format.mimetype  application/pdf
dc.language.iso  en_US
dc.rights  CC BY
dc.subject  Artificial Intelligence
dc.subject  Deep Learning
dc.subject  Fluid Dynamics
dc.subject  Machine Learning
dc.subject  Robotics
dc.subject  Artificial intelligence
dc.subject  Computer science
dc.subject.other  Computer science and engineering
dc.title  Liquids & Robots: An Investigation of Techniques for Robotic Interaction with Liquids
dc.type  Thesis
dc.embargo.terms  Open Access
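
The abstract above describes recurrent network architectures that integrate temporal information for per-pixel liquid detection. As a rough, hypothetical illustration only (not the architectures from the thesis), the following PyTorch sketch shows one way an explicit per-pixel state can be carried forward across frames; the class name, layer sizes, and update rule are assumptions made for this example.

    import torch
    import torch.nn as nn

    # Hypothetical sketch: a tiny recurrent fully-convolutional network for
    # per-pixel liquid / not-liquid labeling. An explicit hidden state is
    # carried forward from frame to frame.
    class RecurrentLiquidDetector(nn.Module):
        def __init__(self, hidden=16):
            super().__init__()
            self.hidden = hidden
            self.encode = nn.Conv2d(3, hidden, kernel_size=3, padding=1)
            self.update = nn.Conv2d(2 * hidden, hidden, kernel_size=3, padding=1)
            self.classify = nn.Conv2d(hidden, 1, kernel_size=1)  # per-pixel liquid logit

        def forward(self, frames):
            # frames: (time, batch, 3, H, W) RGB sequence -> (time, batch, 1, H, W) logits
            t, b, _, h, w = frames.shape
            state = frames.new_zeros(b, self.hidden, h, w)  # explicit memory
            outputs = []
            for i in range(t):
                feat = torch.relu(self.encode(frames[i]))
                state = torch.tanh(self.update(torch.cat([feat, state], dim=1)))
                outputs.append(self.classify(state))
            return torch.stack(outputs)

    # Example: an 8-frame sequence of 64x64 images, batch of 2
    logits = RecurrentLiquidDetector()(torch.randn(8, 2, 3, 64, 64))
    print(logits.shape)  # torch.Size([8, 2, 1, 64, 64])

The abstract also states that SPNets makes fluid dynamics fully differentiable, so that fluid parameters such as viscosity can be discovered from data by gradient descent through the simulator. The sketch below illustrates that idea with a toy differentiable simulator (simple damped particle dynamics, not SPNets' position-based fluid model); the function and parameter names are assumptions, not the SPNets API.

    import torch

    # Toy differentiable "simulator": damped free fall, where `drag` stands in
    # for a viscosity-like fluid parameter.
    def simulate(positions, velocities, drag, steps=50, dt=0.01):
        gravity = torch.tensor([0.0, 0.0, -9.81])
        for _ in range(steps):
            accel = gravity - drag * velocities
            velocities = velocities + dt * accel
            positions = positions + dt * velocities
        return positions

    torch.manual_seed(0)
    init_pos = torch.randn(100, 3)  # 100 "particles"
    init_vel = torch.randn(100, 3)

    true_drag = torch.tensor(0.7)
    observed = simulate(init_pos, init_vel, true_drag)  # stand-in for real observations

    drag_est = torch.tensor(0.1, requires_grad=True)  # unknown parameter to recover
    optimizer = torch.optim.Adam([drag_est], lr=0.05)

    for step in range(200):
        optimizer.zero_grad()
        predicted = simulate(init_pos, init_vel, drag_est)
        loss = torch.nn.functional.mse_loss(predicted, observed)
        loss.backward()  # gradient flows through the whole rollout
        optimizer.step()

    print(f"recovered drag ~ {drag_est.item():.3f} (true {true_drag.item():.3f})")

The same pattern (roll out a differentiable model, compare to observations, and backpropagate into physical parameters or controls) underlies the parameter-estimation, control, and policy-training results summarized in the abstract.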

