Technion - Israel Institute of Technology - Graduate School
Ph.D. Thesis
Ph.D. Student: Guillaume Sicard
Subject: Machine Learning Methods for On-Line Adaptation of Brain-Machine Interfaces
Department: Department of Mechanical Engineering
Supervisor: Professor Miriam Zacksenhouse
Full Thesis Text: English Version


Abstract

Recent developments in the field of brain-machine interfaces (BMIs) have allowed disabled patients to control a robotic arm, regaining motor functions lost due to spinal cord injury or degenerative neurological diseases. However, such interfaces are prone to errors. The aim of this research is to provide new tools for decoding kinematics from neural signals, in order to improve the next generations of BMIs by taking such errors into account and using them to improve the decoding of intended limb movements.


To this end, the research conducted in this thesis has three main goals: i) to detect mistakes made by the robotic arm (or by a controlled cursor on a screen) during its use by a patient, using non-invasive interfaces such as electroencephalography (EEG); ii) to assess whether hand kinematics can be decoded from non-invasive signals such as EEG, a possibility that recent research has both suggested and criticized; and iii) to implement an on-line decoder for invasive BMIs using a reinforcement learning (RL) paradigm.
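As an illustration of goal (i), the sketch below shows one common way such an ErrP detector could be set up: epoched EEG around feedback onset, window-averaged features, and a shrinkage-regularized linear discriminant classifier. The synthetic data, variable names, and classifier choice are assumptions made for illustration, not the exact pipeline used in this thesis.

```python
# Minimal sketch of an ErrP detection pipeline (illustrative only, not the
# exact method used in the thesis). Assumes epoched EEG around feedback
# onset, labeled as "error" or "correct" trials.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: 200 trials, 32 channels, 128 time samples per epoch.
n_trials, n_channels, n_samples = 200, 32, 128
epochs = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, n_trials)          # 1 = decoding error, 0 = correct

# Inject a small synthetic "error-related" deflection so the example runs
# end to end; with real EEG this component comes from the ErrP itself.
epochs[labels == 1, :, 40:60] += 0.3

# Simple feature extraction: mean amplitude in a post-feedback time window
# for every channel (real pipelines typically band-pass filter and combine
# several windows or spatial filters).
features = epochs[:, :, 40:60].mean(axis=2)    # shape: (n_trials, n_channels)

# Linear discriminant analysis with shrinkage is a common ErrP classifier.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, features, labels, cv=5, scoring="accuracy")
print(f"Cross-validated ErrP detection accuracy: {scores.mean():.2f}")
```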


The conclusion of this research shows that, although error-related potentials (ErrPs) indicative of a decoding error can be detected with an accuracy of 80%, state-of-the-art machine learning methods such as deep learning do not improve the classification performance. Furthermore, our results suggest that decoding hand kinematics from EEG does not yield sufficiently good performance to be used in a BCI scheme: linear decoding of hand kinematics from EEG does not seem feasible, and more complex approaches lead to above-chance performance in only some subjects, with a resulting performance that is still too low for an on-line decoder to be implemented practically. As a result, prosthetic control from a neural interface can only be achieved with a sufficiently high level of performance using invasive interfaces. Such an interface was simulated using optimal feedback control (OFC) in order to perform the third part of this research: implementing a reinforcement learning (RL) based method for on-line decoding of hand kinematics, using the cursor's directional error as a reward signal.
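To make the linear-decoding baseline concrete, the sketch below maps lagged EEG features to 2-D hand velocity with ridge regression and reports the per-axis correlation between predicted and measured velocity, which is the usual performance measure; with uninformative signals it stays near chance level. The arrays, dimensions, and regression choice are assumptions for illustration, not the thesis's actual pipeline.

```python
# Minimal sketch of a linear kinematics decoder from EEG features
# (illustrative; variable names and dimensions are assumptions, not the
# thesis's actual pipeline).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical data: low-pass-filtered EEG amplitudes from 32 channels with
# a few time lags per sample, and the corresponding 2-D hand velocity.
n_samples, n_features = 5000, 32 * 5            # 32 channels x 5 lags
X = rng.standard_normal((n_samples, n_features))
velocity = rng.standard_normal((n_samples, 2))  # (vx, vy)

X_train, X_test, v_train, v_test = train_test_split(
    X, velocity, test_size=0.2, shuffle=False)

# Ridge regression is a typical linear decoding baseline.
decoder = Ridge(alpha=1.0).fit(X_train, v_train)
v_pred = decoder.predict(X_test)

# Performance is usually reported as the correlation between predicted and
# measured velocity per axis; with unrelated data it stays near chance.
for axis, name in enumerate(("vx", "vy")):
    r = np.corrcoef(v_pred[:, axis], v_test[:, axis])[0, 1]
    print(f"Correlation for {name}: {r:.3f}")
```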


Reinforcement learning was applied successfully for the on-line decoding of neural activity during continuous reaching movements and was shown to be adaptive to neural reorganization and resilient to reward noise: post-training performance remained high, reaching, on average, 93.2 ± 2.6% of the targets. The trained decoders captured well the preferred directions of the simulated neurons and rapidly adapted to changes in their tuning. Furthermore, training was successful even with noisy rewards, as long as the training was not aborted too early, which suggests that on-line error detection might be a relevant signal for the model update.
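The sketch below illustrates the kind of reward-modulated on-line decoding described above: cosine-tuned simulated neurons drive a linear decoder whose output direction is perturbed by exploration noise and reinforced according to its alignment with the target direction (a proxy for the cursor's directional error). The neuron model, the REINFORCE-style update rule, and all parameter values are assumptions for illustration, not the exact simulator or algorithm used in the thesis.

```python
# Minimal sketch of reward-modulated on-line decoding (illustrative only;
# the update rule and cosine-tuned neuron model are assumptions, not the
# exact algorithm or OFC simulator used in the thesis).
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 40

# Simulated cosine-tuned neurons: each has a preferred direction.
preferred = rng.uniform(0, 2 * np.pi, n_neurons)

def firing_rates(target_angle):
    """Noisy cosine tuning around each neuron's preferred direction."""
    rates = 1.0 + np.cos(target_angle - preferred)
    return rates + 0.1 * rng.standard_normal(n_neurons)

# Linear decoder: firing rates -> 2-D movement direction.
W = 0.01 * rng.standard_normal((2, n_neurons))
lr, sigma = 0.02, 0.2   # learning rate and exploration noise
baseline = 0.0          # running reward baseline to reduce update variance

for trial in range(3000):
    target_angle = rng.uniform(0, 2 * np.pi)
    target = np.array([np.cos(target_angle), np.sin(target_angle)])
    r = firing_rates(target_angle)

    # Stochastic policy: decoded direction plus Gaussian exploration.
    mean_out = W @ r
    noise = sigma * rng.standard_normal(2)
    out = mean_out + noise

    # Reward: how well the cursor step aligns with the target direction.
    reward = float(out @ target / (np.linalg.norm(out) + 1e-8))
    advantage = reward - baseline
    baseline += 0.05 * (reward - baseline)

    # REINFORCE-style update: reinforce the exploration that earned reward.
    W += lr * advantage * np.outer(noise, r) / sigma**2

# After training, the decoded direction should roughly track the target.
test_angle = np.pi / 3
decoded = W @ firing_rates(test_angle)
decoded_angle = np.arctan2(decoded[1], decoded[0])
print(f"Target direction: {test_angle:.2f} rad, decoded: {decoded_angle:.2f} rad")
```

Because the reward here depends only on the decoded direction, the same update could in principle be driven by a noisy, binary error signal such as a detected ErrP, which is the link to the on-line error detection mentioned above.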