Automated Discovery and Learning of Complex Movement Behaviors
In order to create useful physical robots, tell narratives in animated films and interactive games, or understand the underlying principles behind human movement, it is necessary to synthesize movement behaviors with the same wide variety, richness and complexity observed in humans and other animals. Moreover, these behaviors should be discovered automatically from only a few core principles, and not be the result of extensive manual engineering or mere mimicking of demonstrations. In this thesis, I develop novel optimal control methods and apply large-scale neural network training to synthesize movement trajectories and interactive controllers that give rise to a range of behaviors such as getting up from the ground, climbing, moving heavy objects, hand manipulation, acrobatics, and various cooperative actions involving multiple characters and their manipulation of the environment. Coupled with detailed models of human physiology, motions that match various kinematic and dynamic aspects of real human motion can be produced de novo, giving the predictive power to conduct virtual biomechanics experiments. The resulting movements are also used to successfully control a physical bipedal robot. The approach is fully automatic and does not require domain knowledge specific to each behavior, pre-existing examples, or motion capture data. Although discovery and learning are computationally expensive and rely on cloud and GPU computing, the interactive animation can run in real time on any hardware once the controllers are learned.