Coordination of Vision and Body Movements For Prosthetic Control

dc.contributor.advisor: Rombokas, Eric
dc.contributor.advisor: Burden, Sam
dc.contributor.author: Rai, Vijeth
dc.date.accessioned: 2020-10-26T20:38:05Z
dc.date.available: 2020-10-26T20:38:05Z
dc.date.issued: 2020-10-26
dc.date.submitted: 2020
dc.description: Thesis (Ph.D.)--University of Washington, 2020
dc.description.abstract: Powered lower-limb prostheses are becoming more capable in hardware, but their control remains challenging. The state of the art simplifies control of the vast multitude of real-world activities by categorizing them into a handful of commonly encountered activity classes, or "modes", such as flat-ground mode and stair-ascent mode. This dissertation aims to improve prosthetic lower-limb control by addressing two challenges with using modes: 1) estimating and handling the transitions between modes, and 2) providing a larger repertoire of movements encompassing atypical and unstructured activities, such as side-shuffling and obstacle avoidance. Human locomotion exploits coordination between vision and body movements to navigate an environment efficiently. It is a continuous motion that fluidly adapts to the environment and is not always categorizable into modes. We draw inspiration from these facets of human movement to address the two challenges. We introduce a novel Coordinated Movement (CM) controller capable of generating continuous trajectories for all desired movements. We use vision to sense the environment directly and anticipate transitions in advance. Our CM controller exploits the strong inter-joint coordination exhibited in typical movement to predict the trajectory of the prosthetic joint from the motion of the rest of the body. This novel approach unifies all desired movements into a single controller and generates continuous kinematic reference trajectories without explicit modes or transitions. The underlying deep-learning model can easily be re-trained with new movement data to expand the movement vocabulary. Our real-time tests of the CM controller shed light on practical challenges encountered when moving from offline analysis to real-time hardware, while also suggesting avenues for future improvement.
Vision sensors can be a window into the upcoming activities of the user, improving prosthetic transition performance. However, a significant bottleneck in employing vision classifiers to predict prosthetic mode labels is the resource-intensive process of manually labeling the data. This process is prone to subjective bias and limits the number of movements to a handful of typical modes. We introduce an unsupervised method for labeling training data that allows the natural movements to dictate the number and characteristics of the generated modes. We demonstrate that a neural network trained on these auto-generated mode labels can predict terrain changes before the kinematic changes of the user. Higher accuracy on a limited training dataset and better generalizability are achieved by leveraging transfer learning. The sensing and control strategies delineated in this dissertation can be applied toward a more natural experience of powered lower limbs.
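The core idea of the CM controller, predicting a prosthetic joint's trajectory from the motion of the rest of the body, can be sketched in miniature. This is a hypothetical illustration, not the dissertation's implementation: the gait data is synthetic, and a simple least-squares fit stands in for the deep-learning model.

```python
import numpy as np

# Hypothetical sketch of inter-joint coordination: learn a mapping from
# intact-body joint angles to a prosthetic knee trajectory, producing a
# continuous kinematic reference without activity "modes".
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)           # one gait cycle (phase)
hip = 30 * np.sin(t)                         # synthetic hip flexion (deg)
ankle = 10 * np.sin(t + 0.5)                 # synthetic ankle angle (deg)
# Synthetic "ground truth" knee angle, strongly coordinated with hip/ankle:
knee = 0.8 * hip + 1.5 * ankle + rng.normal(0, 0.5, t.size)

X = np.column_stack([hip, ankle, np.ones_like(t)])  # features + bias term
w, *_ = np.linalg.lstsq(X, knee, rcond=None)        # fit coordination map
knee_pred = X @ w                                   # continuous reference trajectory

rmse = np.sqrt(np.mean((knee_pred - knee) ** 2))
print(f"RMSE (deg): {rmse:.2f}")
```

Because the mapping is fit over all movements at once, new activities can be incorporated by re-fitting on additional data rather than by defining a new mode.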
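The unsupervised labeling idea can likewise be sketched: cluster windows of movement features so the data itself dictates the number and character of the modes. The feature values and two-cluster setup below are illustrative assumptions, not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical sketch of unsupervised mode labeling: windows of movement
# features are clustered, and each cluster id becomes an auto-generated
# mode label that could later train a vision classifier.
rng = np.random.default_rng(1)
flat = rng.normal([0.0, 1.0], 0.1, size=(50, 2))     # e.g. flat-ground windows
stairs = rng.normal([2.0, -1.0], 0.1, size=(50, 2))  # e.g. stair-ascent windows
features = np.vstack([flat, stairs])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(f"auto-generated modes: {len(set(labels))}")
```

Letting clustering generate the labels removes the manual-annotation bottleneck and its subjective bias, and the number of modes is no longer fixed in advance.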
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Rai_washington_0250E_22228.pdf
dc.identifier.uri: http://hdl.handle.net/1773/46335
dc.language.iso: en_US
dc.rights: CC BY
dc.subject: Computer Vision
dc.subject: Prosthetic Control
dc.subject: Rehabilitation Engineering
dc.subject: Electrical engineering
dc.subject: Robotics
dc.subject: Biomechanics
dc.subject.other: Electrical engineering
dc.title: Coordination of Vision and Body Movements For Prosthetic Control
dc.type: Thesis

Files

Original bundle

Name: Rai_washington_0250E_22228.pdf
Size: 19.79 MB
Format: Adobe Portable Document Format