Self-supervised representation learning with graph neural networks for region of interest analysis in breast histopathology


Date

2020-12

Advisor

Aksoy, Selim

Abstract

Deep learning has made a major contribution to histopathology image analysis, with learned representations outperforming hand-crafted features. However, two notable challenges remain. The first is the lack of large histopathology datasets. The commonly used setting in deep learning-based approaches is supervised training of deep and wide models on large labeled datasets, yet manually labeling histopathology images is time-consuming, and assembling a large public dataset has also proven difficult due to privacy concerns. Second, clinical practice in histopathology necessitates working with regions of interest of multiple diagnostic classes with arbitrary shapes and sizes. The typical solution to this problem is to aggregate the representations of fixed-sized patches cropped from these regions to obtain region-level representations. However, naive aggregation methods cannot sufficiently exploit the rich contextual information in complex tissue structures. To tackle these two challenges, this thesis proposes a generic method that utilizes graph neural networks, combined with a self-supervised training method using a contrastive loss function. The regions of interest are modeled as graphs whose vertices are fixed-sized patches cropped from the region. The proposed method has two stages. The first stage is patch-level representation learning using convolutional neural networks, which concentrates on cell-level features. The second stage is region-level representation learning using graph neural networks, which can learn the tissue structure. Graph neural networks enable representing arbitrarily shaped regions as graphs and encoding contextual information through message passing between neighboring patches. Self-supervised contrastive learning improves the quality of the learned representations without requiring labeled data; we propose using it to train the graph neural networks with a vertex dropout augmentation. The experiments using a challenging breast histopathology dataset show that the proposed method achieves better performance than the state of the art in both classification and retrieval tasks.
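
The sketch below is a minimal, hypothetical PyTorch rendering of the region-level stage described in the abstract, not the thesis implementation: patch embeddings (assumed to come from the patch-level CNN stage) form the vertices of a region graph, a simple mean-aggregation message-passing layer pools them into a region embedding, two vertex-dropout views of the same region are treated as a positive pair, and an NT-Xent-style contrastive loss pulls them together. All names (RegionGNN, drop_vertices, nt_xent) and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionGNN(nn.Module):
    """One round of mean message passing over patch vertices, pooled to a region vector."""

    def __init__(self, in_dim: int, hid_dim: int, out_dim: int):
        super().__init__()
        self.msg = nn.Linear(in_dim, hid_dim)    # transforms aggregated neighbor features
        self.proj = nn.Linear(hid_dim, out_dim)  # projection head for the contrastive loss

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_patches, in_dim) patch embeddings from the CNN stage
        # adj: (num_patches, num_patches) adjacency of spatially neighboring patches
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = F.relu(self.msg((adj @ x) / deg))    # average the messages from neighbors
        region = h.mean(dim=0)                   # pool all vertices into one region embedding
        return F.normalize(self.proj(region), dim=-1)


def drop_vertices(x: torch.Tensor, adj: torch.Tensor, p: float = 0.2):
    """Vertex dropout augmentation: keep a random subset of patches and their edges."""
    keep = torch.rand(x.size(0)) > p
    if not keep.any():                           # always keep at least one vertex
        keep[torch.randint(x.size(0), (1,))] = True
    idx = keep.nonzero(as_tuple=True)[0]
    return x[idx], adj[idx][:, idx]


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over normalized region embeddings; (z1[i], z2[i]) are positive pairs."""
    batch = z1.size(0)
    z = torch.cat([z1, z2], dim=0)               # (2B, d)
    sim = z @ z.t() / temperature                # pairwise cosine similarities
    sim = sim.masked_fill(torch.eye(2 * batch, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = RegionGNN(in_dim=128, hid_dim=64, out_dim=32)
    # Two toy regions with different numbers of patches and random adjacency.
    regions = [(torch.randn(12, 128), (torch.rand(12, 12) > 0.7).float()),
               (torch.randn(7, 128), (torch.rand(7, 7) > 0.7).float())]
    z1 = torch.stack([model(*drop_vertices(x, a)) for x, a in regions])
    z2 = torch.stack([model(*drop_vertices(x, a)) for x, a in regions])
    loss = nt_xent(z1, z2)
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```

The variable number of patches per region in the toy example reflects the arbitrary region shapes and sizes mentioned in the abstract; the graph pooling step is what turns each variable-sized set of patch vertices into a single fixed-length region embedding.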

Degree Discipline

Computer Engineering

Degree Level

Master's

Degree Name

MS (Master of Science)

Language

English

Type