|M.Sc Student||Polinsky Shunit|
|Subject||R2Gmotion: an IMU-Based Approach for Controlling Transradial Prostheses by Learning the Kinematics of Reach-to-Grasp Movement|
|Department||Department of Mechanical Engineering||Supervisor||Prof. Alon Wolf|
|Full Thesis text|
The human body is a highly efficient system composed of countless components that work together in remarkable coordination and synergy. Harnessing this principle for the natural reach-to-grasp (R2G) movement is the cornerstone of this work, which aims to develop an intelligent prosthetic hand that can learn the body language of R2G patterns and act in a smoother, more dynamic way. In contrast to previous studies, we acquired and decoded the time-dependent R2G movement, rather than classifying a set of discrete, need-to-learn user gestures, as most control strategies do. Our hypothesis is that, owing to the complexity and synergy of the R2G movement, grasp intention can be predicted and distinguished from other hand movements in people with transradial amputation by analyzing the kinematics of the upper-body segments. Since the end goal is to control transradial prostheses, we assumed that no kinematic data are available from below the elbow. For this purpose, we developed a low-cost wearable IMU-based system that acquires the upper-limb and torso kinematics, excluding pronation-supination of the palm.

Ten healthy participants were recorded on two different days by our system, as well as by an optoelectronic system for validation. Each participant performed R2G tasks with different objects from different, randomized locations at random times, following visual and vocal instructions managed by an application we built, while sitting and standing (a total of ~3.5k R2G movements). The participants were also recorded while walking and talking, to mimic other hand movements and validate the robustness of our algorithm in differentiating R2G movements from other tasks. The data analysis compared different machine-learning architectures and different input data types, such as varied time-window samples and kinematic characteristics.
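The varied time-window inputs mentioned above can be illustrated with a minimal sketch of sliding-window segmentation over a multichannel kinematics stream. The window and stride values and the channel count here are illustrative assumptions, not the parameters used in the thesis:

```python
import numpy as np

def sliding_windows(signal: np.ndarray, window: int, stride: int) -> np.ndarray:
    """Split a (T, channels) kinematics stream into overlapping windows.

    Returns an array of shape (n_windows, window, channels), each window
    being one candidate input sample for the model.
    """
    n = 1 + (len(signal) - window) // stride
    return np.stack([signal[i * stride : i * stride + window] for i in range(n)])

# Example: 2 s of a hypothetical 7-channel joint-angle stream at 100 Hz,
# cut into 0.5 s windows with 50% overlap.
stream = np.zeros((200, 7))
windows = sliding_windows(stream, window=50, stride=25)
print(windows.shape)  # (7, 50, 7)
```

Varying `window` and `stride` yields the different time-window sample sizes that the model comparison in the study could evaluate.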
The selected machine-learning model combines a classifier that detects whether an R2G movement is currently occurring (median results: 89.25% sensitivity and 85.28% specificity) with a regression estimator that estimates the current stage of the R2G movement (median results: 0.031 mean-square error, for an output range of [0,1]). Based on the model outputs and customized parameters, the system identifies the grasp moment (for the selected parameters: 1.5% false positives and 92.8% true positives, of which 23.8% are considered early detections) and triggers the low-cost 3D-printed prosthetic hand we built to complete the action. In conclusion, we present a novel concept for controlling prosthetic hands that takes advantage of the arm-reaching movement that naturally precedes almost any grasp. This concept provides a better starting point for prosthetic hand control and potentially enables a low control burden for sensor-fused systems. R2Gmotion, a wearable, real-time system based on low-cost, accessible components, is ready to be implemented: it can be easily reproduced and used either as a stand-alone control system or as a platform on top of which other methods can be built to construct a robust fused control system for prosthetic hands.
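The combination of the two model outputs can be sketched as a simple decision rule: trigger the hand only when the classifier reports an ongoing R2G movement and the stage estimate indicates the reach is near completion. The function name and threshold values below are hypothetical illustrations, not the customized parameters reported in the thesis:

```python
def grasp_trigger(p_r2g: float, stage: float,
                  p_thresh: float = 0.8, stage_thresh: float = 0.9) -> bool:
    """Decide whether to close the prosthetic hand.

    p_r2g  -- classifier confidence that an R2G movement is occurring
    stage  -- regression estimate of movement progress, in [0, 1]
    """
    return p_r2g >= p_thresh and stage >= stage_thresh

# Mid-reach: movement detected but far from the grasp moment -> do not trigger.
print(grasp_trigger(0.95, 0.40))  # False
# Near the end of the reach with high classifier confidence -> trigger.
print(grasp_trigger(0.95, 0.93))  # True
```

Tightening `p_thresh` trades fewer false triggers against later (or missed) detections, which mirrors the false-positive / early-detection trade-off described above.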