Shape-preserving loss in deep learning for cell segmentation
Abstract
Fully convolutional networks (FCNs) have become the state-of-the-art models for cell instance segmentation in microscopy images. These networks are trained by minimizing a loss function that typically defines the loss of each pixel separately and aggregates these pixel losses by averaging or summing. Since this pixel-wise definition of a loss function does not consider the spatial relations between the pixels' predictions, it does not sufficiently encourage the network to learn particular shapes. This ability may, however, be important for better segmenting cells, which commonly share similar morphological characteristics by nature. To address this issue, this thesis introduces a new dynamic shape-preserving loss function to train an FCN for cell instance segmentation. This loss function is a weighted cross-entropy whose pixel weights are defined to be aware of prior shapes. To this end, it calculates the weights based on the similarity between the shapes of the segmented objects that the pixels belong to and shape priors estimated from the ground truth cells. The thesis uses Fourier descriptors to quantify the shape of a cell and proposes a similarity metric defined on the distribution of these Fourier descriptors. Experiments on four different medical image datasets demonstrate that the proposed loss function outperforms its non-shape-aware counterpart for segmenting the instances in these datasets.
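The abstract gives no formulas or code, but the core mechanism can be sketched. The Python snippet below is a minimal illustration under stated assumptions, not the thesis's implementation: the functions fourier_descriptors, shape_similarity, and shape_aware_weighted_ce are hypothetical names, the exponential z-score similarity is a stand-in for the distribution-based metric the thesis defines, and the 1 + alpha * (1 - similarity) weighting is one plausible way to make cross-entropy weights shape-aware.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=10):
    """Translation-, scale-, and rotation-invariant Fourier descriptors of a
    closed contour given as an (N, 2) array of (x, y) boundary points."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex representation of the boundary
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                          # drop DC term -> translation invariance
    mags = np.abs(coeffs)                    # drop phase -> rotation/start-point invariance
    mags = mags / (mags[1] + 1e-12)          # normalize by first harmonic -> scale invariance
    return mags[1:n_coeffs + 1]

def shape_similarity(descriptors, prior_mean, prior_std):
    """Similarity in [0, 1] between one object's descriptors and a Gaussian
    prior estimated from ground-truth cells (hypothetical metric: mean
    squared z-score mapped through an exponential)."""
    z = (descriptors - prior_mean) / (prior_std + 1e-12)
    return float(np.exp(-np.mean(z ** 2)))

def shape_aware_weighted_ce(probs, labels, object_ids, similarities, alpha=1.0):
    """Weighted binary cross-entropy: pixels of segmented objects whose shape
    deviates from the prior (low similarity) receive larger weights.
    probs: (H, W) foreground probabilities; labels: (H, W) binary ground
    truth; object_ids: (H, W) instance map of the current segmentation
    (0 = background); similarities: dict mapping object id -> similarity."""
    eps = 1e-12
    weights = np.ones_like(probs)
    for obj_id, sim in similarities.items():
        weights[object_ids == obj_id] = 1.0 + alpha * (1.0 - sim)
    ce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
    return float(np.mean(weights * ce))
```

Under this assumed scheme, objects whose current shapes deviate from the prior get larger pixel weights, so the network is penalized more where it produces implausible shapes; recomputing the weights from the evolving predictions during training would give the dynamic behavior the abstract mentions.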