M.Sc Student: Ornstein Hannah
Subject: Automatic Classification of Echocardiogram Orientation using Neural Network Based Methods
Department: Department of Biomedical Engineering
Supervisor: Professor Emeritus Dan Adam
The standard views in echocardiography capture distinct slices of the heart that can be used to assess cardiac function. Determining the view of a given echocardiogram is the first step in analysis. To automate this step, a deep network of the ResNet-18 architecture was used to classify six standard views. The network parameters were pre-trained on the ImageNet database, and prediction quality was assessed with a visualization tool known as gradient-weighted class activation mapping (Grad-CAM). The network distinguished between short-axis and long-axis views with ~99% accuracy. Ten-fold cross-validation showed 97-98% accuracy for the long-axis subcategories (two-chamber, three-chamber, and four-chamber views). Grad-CAM images of these views highlighted features similar to those used by experts in manual classification. Short-axis subcategories (apical, mitral valve, and papillary muscle views) had accuracies of 54-73%. Grad-CAM images illustrate that the network classifies most short-axis views as belonging to the papillary muscle level. Incorporating more training images and time-dependent features would likely improve short-axis accuracy. Overall, deep networks can reliably classify echocardiogram views.
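The Grad-CAM visualization mentioned above can be sketched in a few lines: channel weights are obtained by global-average-pooling the class-score gradients over the last convolutional layer's activation maps, and the heatmap is the ReLU of the weighted sum of those maps. The sketch below is a minimal, framework-free illustration of that computation, assuming precomputed activations and gradients of hypothetical shape (K, H, W); it does not reproduce the thesis's actual network or data.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM from the last conv layer's outputs.

    activations, gradients: (K, H, W) arrays -- feature maps and the
    gradients of the target class score with respect to them
    (hypothetical inputs for illustration).
    """
    # alpha_k: global-average-pool the gradients per channel
    weights = gradients.mean(axis=(1, 2))                          # (K,)
    # Weighted sum of activation maps, then ReLU
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # Normalize to [0, 1] so the map can be overlaid on the echo frame
    return cam / cam.max() if cam.max() > 0 else cam

# Dummy example with 8 channels on a 7x7 feature grid
acts = np.random.rand(8, 7, 7)
grads = np.random.rand(8, 7, 7)
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

In practice the heatmap would be upsampled to the echocardiogram's resolution and overlaid on the frame, which is how the highlighted anatomical features described in the abstract are inspected.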