AttentionBoost: learning what to attend for gland segmentation in histopathological images by boosting fully convolutional networks
Abstract
Fully convolutional networks (FCNs) are widely used for instance segmentation. One important challenge is to train these networks well enough that they generalize to hard-to-learn pixels, whose correct prediction may greatly affect overall success. A typical group of such hard-to-learn pixels are the boundaries between instances. Many studies have developed strategies to pay more attention to learning these boundary pixels, including designing multi-task networks with an additional boundary-prediction task and increasing the weights of boundary-pixel predictions in the loss function. Such strategies require defining what to attend to beforehand and incorporating this predefined attention into the learning model. However, there may exist other groups of hard-to-learn pixels, and manually defining and incorporating the appropriate attention for each group may not be feasible. To provide an adaptable solution that learns different groups of hard-to-learn pixels, this article proposes AttentionBoost, a new multi-attention learning model based on adaptive boosting, for the task of gland instance segmentation in histopathological images. AttentionBoost designs a multi-stage network and introduces a new loss adjustment mechanism that lets an FCN adaptively learn what to attend to at each stage directly from image data, without requiring any prior definition. This mechanism modulates the attention of each stage to correct the mistakes of previous stages by adjusting the loss weight of each pixel prediction separately, according to how accurately the previous stages predicted that pixel. Working on histopathological images of colon tissues, our experiments demonstrate that the proposed AttentionBoost model improves gland segmentation results compared to its counterparts.
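As a rough illustration of this loss-adjustment idea, the sketch below shows one way a stage-wise, per-pixel weighting could be realized for a binary gland map in PyTorch. It is a minimal sketch under stated assumptions: the function name stagewise_weighted_loss, the use of binary cross-entropy, and the specific rule that increases a pixel's weight by the previous stage's error on that pixel are illustrative choices, not the exact formulation used in the article.

import torch
import torch.nn.functional as F

def stagewise_weighted_loss(stage_probs, target):
    # stage_probs: list of (N, 1, H, W) sigmoid outputs, one tensor per stage
    # target:      (N, 1, H, W) binary ground-truth gland map
    total = 0.0
    weights = torch.ones_like(target)  # uniform attention for the first stage
    for probs in stage_probs:
        # Per-pixel cross-entropy for the current stage, weighted by how
        # poorly the earlier stages handled each pixel.
        bce = F.binary_cross_entropy(probs, target, reduction='none')
        total = total + (weights * bce).mean()
        # Pixels this stage gets wrong receive larger weights at the next
        # stage, so later stages attend to the remaining mistakes.
        error = torch.abs(probs.detach() - target)
        weights = 1.0 + error  # illustrative weighting rule, not the paper's
    return total

In this sketch, each stage's predictions feed the weight map of the next stage, mirroring the boosting-style idea that later stages concentrate on pixels the earlier stages mispredicted.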