Coordination of Vision and Body Movements for Prosthetic Control

Authors

Rai, Vijeth

Abstract

Powered lower-limb prostheses are becoming increasingly capable in hardware, but their control remains challenging. The state of the art simplifies control of the vast multitude of real-world activities by categorizing them into a handful of commonly encountered activity classes, or "modes," such as flat-ground mode and stair-ascent mode. This dissertation aims to improve prosthetic lower-limb control by focusing on two challenges of using modes: 1) estimating and handling the transitions between modes, and 2) providing a larger repertoire of movements encompassing atypical and unstructured activities, such as side-shuffling and obstacle avoidance.

Human locomotion exploits coordination between vision and body movements to efficiently navigate an environment. It is a continuous motion that fluidly adapts to the environment and is not always categorizable into modes. We draw inspiration from these facets of human movement to address the two challenges. We introduce a novel Coordinated Movement (CM) controller capable of generating continuous trajectories for all desired movements, and we use vision to directly sense the environment and anticipate transitions in advance. The CM controller exploits the strong inter-joint coordination exhibited in a typical movement to predict the trajectory of the prosthetic joint from the motion of the rest of the body. This approach unifies all desired movements into a single controller and generates continuous kinematic reference trajectories without explicit modes or transitions. The underlying deep-learning model can be retrained with new movement data to expand the movement vocabulary. Our real-time tests of the CM controller shed light on the practical challenges encountered when moving from offline analysis to real-time hardware, while also pointing to avenues for future improvement.

Vision sensors offer a window into the user's upcoming activities and can improve prosthetic transition performance. However, a significant bottleneck in employing vision classifiers to predict prosthetic mode labels is the resource-intensive process of manually labeling the data, which is prone to subjective bias and limits the number of movements to a handful of typical modes. We introduce an unsupervised method for labeling training data that allows the natural movements to dictate the number and characteristics of the generated modes. We demonstrate that a neural network trained on these auto-generated mode labels can predict terrain changes before the corresponding kinematic changes of the user. Leveraging transfer learning yields higher accuracy on a limited training dataset and better generalizability. The sensing and control strategies delineated in this dissertation can be applied toward a more natural experience with powered lower limbs.
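To make the CM idea concrete: predicting the prosthetic joint's trajectory from the motion of the rest of the body is, at its core, a sequence-regression problem. Below is a minimal sketch in PyTorch, assuming a hypothetical input of a sliding window of residual-body joint angles and a single predicted knee angle; the dissertation's actual architecture, sensor set, and training procedure are not specified in this abstract.

```python
# Minimal sketch of a coordinated-movement-style regressor. Assumptions:
# input = a window of residual-body joint angles, output = next knee angle.
import torch
import torch.nn as nn

class CMRegressor(nn.Module):
    """Predict a prosthetic joint angle from rest-of-body kinematics."""
    def __init__(self, n_joints=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_joints, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one prosthetic joint angle

    def forward(self, x):                 # x: (batch, time, n_joints)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # angle at the next time step

# Dummy training step on synthetic data, just to show the regression setup.
model = CMRegressor()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 100, 8)               # 32 windows, 100 samples, 8 joints
y = torch.randn(32, 1)                    # target knee angle
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optim.step()
```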
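The unsupervised labeling step can be illustrated as clustering: frame-level features are grouped so that the data itself dictates the number and character of the modes, and the resulting cluster IDs then serve as training labels for a supervised classifier. This is a hedged sketch using scikit-learn on stand-in features; the feature extractor, cluster count, and clustering method used in the dissertation are assumptions here.

```python
# Sketch: auto-generate mode labels by clustering frame features, then use
# those labels to train a supervised terrain classifier. The feature source
# and cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
frame_features = rng.normal(size=(5000, 128))  # stand-in for vision embeddings

# Unsupervised step: clusters become the "modes". In practice k would be
# chosen from the data (e.g., via silhouette score); it is fixed here.
auto_labels = KMeans(n_clusters=5, n_init=10,
                     random_state=0).fit_predict(frame_features)

# Supervised step: train a classifier on the auto-generated labels.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200, random_state=0)
clf.fit(frame_features, auto_labels)
```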
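Transfer learning, credited above with higher accuracy on limited data, typically means adapting a network pretrained on a large image corpus to the new task. A generic sketch using torchvision's ResNet-18 (weights API available in torchvision 0.13 and later); the choice of backbone and freezing strategy are illustrative assumptions, not the dissertation's reported setup.

```python
# Sketch: adapt an ImageNet-pretrained backbone to the terrain-mode task by
# freezing the pretrained features and retraining the classification head.
import torch.nn as nn
from torchvision import models

n_modes = 5  # assumed number of auto-generated modes
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for p in backbone.parameters():    # freeze pretrained feature extractor
    p.requires_grad = False

# Replace the final layer with a new trainable head for the mode labels;
# fine-tune this head (and optionally later blocks) on the labeled frames.
backbone.fc = nn.Linear(backbone.fc.in_features, n_modes)
```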

Description

Thesis (Ph.D.)--University of Washington, 2020
