Gaze2Grasp: Vision-based system for pre-grasp prosthesis control
Authors
Karrenbach, Maxim Amon
Abstract
Users of prosthetic limbs face many challenges operating their devices in everyday tasks. Actions such as stair descent and object grasping are performed largely subconsciously by a person without limb impairment. A person with a limb amputation or impairment bears the full weight of this cognitive load and is often forced to work around the limitations of the prosthesis through compensation strategies, such as the "overhanging toe" in stair descent and shoulder compensations in grasping tasks. My research aims to ease the burden placed on prosthetic limb users by applying the learning capabilities of data-driven neural network methods, particularly to object grasping. Gaze2Grasp is a machine learning algorithm that uses eye-tracked gaze to predict preferable wrist rotations of a virtual prosthesis. Because it takes vision as input and produces wrist rotations as output, Gaze2Grasp could be implemented in any prosthetic controller with wrist degrees of freedom, and it demonstrates the value of vision in object grasping. The algorithm may also be useful for other applications, such as remote-operated robotic grasping systems.
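To make the input/output relationship concrete, the following is a minimal, purely illustrative sketch of the kind of gaze-to-wrist-rotation mapping the abstract describes: a small feed-forward network taking a gaze feature vector and producing wrist rotation angles. All dimensions, weights, and the network architecture here are assumptions for illustration, not details from the thesis.

```python
import numpy as np

# Hypothetical sketch of a gaze -> wrist-rotation regressor.
# Dimensions and random weights are illustrative placeholders only.
rng = np.random.default_rng(0)

GAZE_DIM = 3    # assumed gaze-direction feature size (e.g., a unit vector)
HIDDEN = 16     # assumed hidden-layer width
WRIST_DOF = 2   # assumed wrist degrees of freedom (e.g., pronation, flexion)

W1 = rng.normal(scale=0.1, size=(GAZE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, WRIST_DOF))
b2 = np.zeros(WRIST_DOF)

def predict_wrist_rotation(gaze: np.ndarray) -> np.ndarray:
    """Map a gaze feature vector to wrist rotation angles in radians."""
    h = np.tanh(gaze @ W1 + b1)          # hidden activation
    return np.pi * np.tanh(h @ W2 + b2)  # bound each angle to (-pi, pi)

gaze_sample = np.array([0.0, 0.2, 0.98])  # example normalized gaze direction
angles = predict_wrist_rotation(gaze_sample)
print(angles.shape)  # (2,)
```

In a trained system the weights would be learned from paired gaze and grasp data rather than sampled randomly; the sketch only shows how vision-derived input could drive a controller's wrist degrees of freedom.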
Description
Thesis (Master's)--University of Washington, 2021
