M.Sc Student: Oran Shayer
Subject: A Deep Learning Approach for Generic Image Segmentation
Department: Department of Electrical Engineering
Supervisor: Full Professor Michael Lindenbaum
Recent advances in deep learning and convolutional neural networks (CNNs) have had a profound impact on almost every computer vision task. However, generic (non-semantic) image segmentation is a notable exception, even though it is one of the most fundamental and widely studied tasks in the field. In this work, we revisit the generic segmentation task and propose Deep Generic Segmentation (DGS) -- a new deep learning approach combined with conditional random fields (CRFs).
We start by seeking pixel-wise representations whose learned features separate pixels belonging to different segments. We describe two possible methods for learning these representations: the first uses a triplet loss, and the second relies on a new supervised learning task. To examine the usefulness of these representations for generic segmentation, we propose a new segmentation algorithm. Our algorithm differs significantly from previous popular segmentation algorithms and consists of two main stages: a segment seed generation stage, followed by a CRF-based final processing stage. The learned representations are used in several stages of the algorithm. We tested our representations and segmentation method on BSDS500 and Pascal Context. We show that we are able to learn representations that are meaningful in the context of segmentation, and that the representations themselves achieve state-of-the-art segment similarity scores. While we did not achieve the best results on the generic segmentation task itself, we present promising and competitive results using this method.
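The triplet-loss idea mentioned above can be illustrated with a minimal sketch. The thesis does not specify its exact formulation in this abstract, so the margin value, squared-Euclidean distance, and sampling scheme below are illustrative assumptions: an anchor pixel and a positive pixel are drawn from the same segment, a negative pixel from a different segment, and the loss pulls same-segment embeddings together while pushing different-segment embeddings at least a margin apart.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on pixel embeddings (illustrative sketch).

    anchor/positive are embeddings of pixels assumed to lie in the same
    segment; negative is an embedding of a pixel from a different segment.
    The margin of 1.0 is an assumed hyperparameter, not taken from the thesis.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # same-segment distance
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # cross-segment distance
    # Loss is zero once the negative is at least `margin` farther than the positive.
    return np.maximum(0.0, d_pos - d_neg + margin)

# Example: a well-separated triplet incurs no loss; a collapsed one pays the margin.
a = np.array([0.0, 0.0])
p = np.array([0.0, 0.0])   # same segment as the anchor
n = np.array([2.0, 0.0])   # different segment, distance^2 = 4 > margin
loss_good = triplet_loss(a, p, n)   # 0.0
loss_bad = triplet_loss(a, p, p)    # margin = 1.0 (negative coincides with positive)
```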
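To give a feel for how learned pixel embeddings could drive the seed generation stage, here is a deliberately simplified sketch. It is not the thesis algorithm: the greedy scan, running-centroid update, and distance threshold `tau` are all assumptions made for illustration. Each pixel joins the first seed whose mean embedding is within `tau`, otherwise it starts a new seed; in the actual pipeline the seeds would then be refined by the CRF stage.

```python
import numpy as np

def greedy_seed_labels(embeddings, tau=0.5):
    """Toy seed generation from pixel embeddings (illustrative assumption).

    embeddings: (N, D) array of per-pixel feature vectors.
    tau: assumed distance threshold for attaching a pixel to an existing seed.
    Returns an (N,) array of integer seed labels.
    """
    labels = -np.ones(len(embeddings), dtype=int)
    centroids = []  # running mean embedding of each seed
    counts = []     # number of pixels assigned to each seed
    for i, e in enumerate(embeddings):
        for s, c in enumerate(centroids):
            if np.linalg.norm(e - c) < tau:
                labels[i] = s
                counts[s] += 1
                centroids[s] = c + (e - c) / counts[s]  # incremental mean
                break
        else:
            # No nearby seed: this pixel founds a new one.
            labels[i] = len(centroids)
            centroids.append(e.astype(float).copy())
            counts.append(1)
    return labels

# Two tight clusters of embeddings yield two seeds.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
seeds = greedy_seed_labels(emb)  # [0, 0, 1, 1]
```

The point of the sketch is only that well-separated embeddings make seed grouping nearly trivial, which is precisely why the representation-learning stage matters.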