Browsing by Author "Elmore, J. G."
Now showing 1 - 7 of 7
Item Open Access: Deep feature representations for variable-sized regions of interest in breast histopathology (IEEE, 2021)
Authors: Mercan, Caner; Aygüneş, Bulut; Aksoy, Selim; Mercan, Ezgi; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.
Objective: Modeling variable-sized regions of interest (ROIs) in whole slide images using deep convolutional networks is a challenging task, as these networks typically require fixed-sized inputs that should contain sufficient structural and contextual information for classification. We propose a deep feature extraction framework that builds an ROI-level feature representation via weighted aggregation of the representations of variable numbers of fixed-sized patches sampled from nuclei-dense regions in breast histopathology images. Methods: First, the initial patch-level feature representations are extracted from both fully-connected layer activations and pixel-level convolutional layer activations of a deep network, and the weights are obtained from the class predictions of the same network trained on patch samples. Then, the final patch-level feature representations are computed by concatenation of weighted instances of the extracted feature activations. Finally, the ROI-level representation is obtained by fusion of the patch-level representations by average pooling. Results: Experiments using a well-characterized data set of 240 slides containing 437 ROIs of variable sizes and shapes, marked by experienced pathologists, result in an accuracy score of 72.65% in classifying ROIs into four diagnostic categories that cover the whole histologic spectrum. Conclusion: The results show that the proposed feature representations are superior to existing approaches and provide accuracies that are higher than the average accuracy of another set of pathologists. Significance: The proposed generic representation, which can be extracted from any type of deep convolutional architecture, combines the patch appearance information captured by the network activations and the diagnostic relevance predicted by the class-specific scoring of patches for effective modeling of variable-sized ROIs.

Item Open Access: Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks (Elsevier, 2018)
Authors: Geçer, Barış; Aksoy, Selim; Mercan, E.; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.
Generalizability of algorithms for binary cancer vs. no cancer classification is unknown for clinically more significant multi-class scenarios where intermediate categories have different risk factors and treatment strategies. We present a system that classifies whole slide images (WSI) of breast biopsies into five diagnostic categories. First, a saliency detector that uses a pipeline of four fully convolutional networks, trained with samples from records of pathologists' screenings, performs multi-scale localization of diagnostically relevant regions of interest in WSI. Then, a convolutional network, trained from consensus-derived reference samples, classifies image patches as non-proliferative or proliferative changes, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma. Finally, the saliency and classification maps are fused for pixel-wise labeling and slide-level categorization. Experiments using 240 WSI showed that both saliency detector and classifier networks performed better than competing algorithms, and the five-class slide-level accuracy of 55% was not statistically different from the predictions of 45 pathologists. We also present example visualizations of the learned representations for breast cancer diagnosis.

Item Open Access: From patch-level to ROI-level deep feature representations for breast histopathology classification (SPIE, 2019)
Authors: Mercan, Caner; Aksoy, Selim; Mercan, E.; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.; Tomaszewski, J. E.; Ward, A. D.
We propose a framework for learning feature representations for variable-sized regions of interest (ROIs) in breast histopathology images from the convolutional network properties at patch-level. The proposed method involves fine-tuning a pre-trained convolutional neural network (CNN) by using small fixed-sized patches sampled from the ROIs. The CNN is then used to extract a convolutional feature vector for each patch. The softmax probabilities of a patch, also obtained from the CNN, are used as weights that are separately applied to the feature vector of the patch. The final feature representation of a patch is the concatenation of the class-probability weighted convolutional feature vectors. Finally, the feature representation of the ROI is computed by average pooling of the feature representations of its associated patches. The feature representation of the ROI contains local information from the feature representations of its patches while encoding cues from the class distribution of the patch classification outputs. The experiments show the discriminative power of this representation in a 4-class ROI-level classification task on breast histopathology slides where our method achieved an accuracy of 66.8% on a data set containing 437 ROIs with different sizes.
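The first and third entries above describe the same core idea: weight each patch's CNN feature vector by its class probabilities, concatenate the weighted copies, and average-pool over the patches of an ROI. The following is a minimal NumPy sketch of that aggregation under assumed array shapes; it is an illustration of the described scheme, not the authors' code.

```python
import numpy as np

def roi_representation(patch_features, patch_class_probs):
    """Aggregate patch-level CNN features into a single ROI-level vector.

    patch_features: (N, D) array of per-patch feature activations.
    patch_class_probs: (N, C) array of per-patch softmax probabilities.
    Returns a (C * D,) ROI descriptor.
    """
    # Weight each patch's feature vector by each of its C class probabilities
    # and concatenate the C weighted copies: shape (N, C * D).
    weighted = patch_class_probs[:, :, None] * patch_features[:, None, :]
    weighted = weighted.reshape(patch_features.shape[0], -1)
    # Fuse the variable number of patches by average pooling.
    return weighted.mean(axis=0)

# Example with made-up sizes: 37 patches, 512-dim features, 4 diagnostic classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(37, 512))
probs = rng.dirichlet(np.ones(4), size=37)
roi_vec = roi_representation(feats, probs)  # shape (2048,)
```

Because the pooling happens after weighting, ROIs with different numbers of patches still map to descriptors of the same length, which is what makes the representation usable for variable-sized regions.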
Item Open Access: Localization of diagnostically relevant regions of interest in whole slide images (IEEE, 2014-08)
Authors: Mercan, E.; Aksoy, Selim; Shapiro, L. G.; Weaver, D. L.; Brunye, T.; Elmore, J. G.
Whole slide imaging technology enables pathologists to screen biopsy images and make a diagnosis in a digital form. This creates an opportunity to understand the screening patterns of expert pathologists and extract the patterns that lead to accurate and efficient diagnoses. For this purpose, we are taking the first step to interpret the recorded actions of world-class expert pathologists on a set of digitized breast biopsy images. We propose an algorithm to extract regions of interest from the logs of image screenings using zoom levels, time, and the magnitude of panning motion. Using diagnostically relevant regions marked by experts, we use the visual bag-of-words model with texture and color features to describe these regions and train probabilistic classifiers to predict similar regions of interest in new whole slide images. The proposed algorithm gives promising results for detecting diagnostically relevant regions. We hope this attempt to predict the regions that attract pathologists' attention will provide the first step in a more comprehensive study to understand the diagnostic patterns in histopathology.

Item Open Access: Localization of diagnostically relevant regions of interest in whole slide images: a comparative study (Springer New York LLC, 2016-08)
Authors: Mercan, E.; Aksoy, S.; Shapiro, L. G.; Weaver, D. L.; Brunyé, T. T.; Elmore, J. G.
Whole slide digital imaging technology enables researchers to study pathologists' interpretive behavior as they view digital slides and gain new understanding of the diagnostic medical decision-making process. In this study, we propose a simple yet important analysis to extract diagnostically relevant regions of interest (ROIs) from tracking records using only pathologists' actions as they viewed biopsy specimens in the whole slide digital imaging format (zooming, panning, and fixating). We use these extracted regions in a visual bag-of-words model based on color and texture features to predict diagnostically relevant ROIs on whole slide images. Using a logistic regression classifier in a cross-validation setting on 240 digital breast biopsy slides and viewport tracking logs of three expert pathologists, we produce probability maps that show 74% overlap with the actual regions at which pathologists looked. We compare different bag-of-words models by changing dictionary size, visual word definition (patches vs. superpixels), and training data (automatically extracted ROIs vs. manually marked ROIs). This study is a first step in understanding the scanning behaviors of pathologists and the underlying reasons for diagnostic errors. © 2016, Society for Imaging Informatics in Medicine.
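The two localization papers above derive candidate ROIs purely from viewing behavior (zoom level, dwell time, and panning). The sketch below is a rough illustration of that idea under an assumed log schema and made-up thresholds; the papers' actual viewport log format and cut-off values are not reproduced here.

```python
import numpy as np

def candidate_rois(view_log, min_zoom=5.0, min_duration=2.0, max_pan=200.0):
    """Select viewport rectangles that look like deliberate inspection.

    view_log: list of dicts with keys 'x', 'y', 'w', 'h' (viewport rectangle in
    slide coordinates), 'zoom' (magnification), and 'duration' (seconds until
    the next recorded action). The schema and thresholds are illustrative
    assumptions, not the published ones.
    """
    rois = []
    prev_center = None
    for entry in view_log:
        center = np.array([entry['x'] + entry['w'] / 2.0,
                           entry['y'] + entry['h'] / 2.0])
        # Panning magnitude relative to the previous viewport center.
        pan = 0.0 if prev_center is None else float(np.linalg.norm(center - prev_center))
        prev_center = center
        # Keep views that are zoomed in, dwelled on, and not part of a fast pan:
        # a rough proxy for the zoom / duration / panning cues used in the papers.
        if entry['zoom'] >= min_zoom and entry['duration'] >= min_duration and pan <= max_pan:
            rois.append((entry['x'], entry['y'], entry['w'], entry['h']))
    return rois
```

The returned rectangles would then be described with color and texture features (for example, a visual bag-of-words encoding) before training a classifier to predict relevant regions on unseen slides.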
Item Open Access: Multi-instance multi-label learning for multi-class classification of whole slide breast histopathology images (Institute of Electrical and Electronics Engineers, 2018)
Authors: Mercan, C.; Aksoy, Selim; Mercan, E.; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.
Digital pathology has entered a new era with the availability of whole slide scanners that create high-resolution images of full biopsy slides. Consequently, the uncertainty regarding the correspondence between the image areas and the diagnostic labels assigned by pathologists at the slide level, and the need for identifying regions that belong to multiple classes with different clinical significances have emerged as two new challenges. However, generalizability of the state-of-the-art algorithms, whose accuracies were reported on carefully selected regions of interest (ROIs) for the binary benign versus cancer classification, to these multi-class learning and localization problems is currently unknown. This paper presents our potential solutions to these challenges by exploiting the viewing records of pathologists and their slide-level annotations in weakly supervised learning scenarios. First, we extract candidate ROIs from the logs of pathologists' image screenings based on different behaviors, such as zooming, panning, and fixation. Then, we model each slide with a bag of instances represented by the candidate ROIs and a set of class labels extracted from the pathology forms. Finally, we use four different multi-instance multi-label learning algorithms for both slide-level and ROI-level predictions of diagnostic categories in whole slide breast histopathology images. Slide-level evaluation using 5-class and 14-class settings showed average precision values up to 81% and 69%, respectively, under different weakly labeled learning scenarios. ROI-level predictions showed that the classifier could successfully perform multi-class localization and classification within whole slide images that were selected to include the full range of challenging diagnostic categories.
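The bag-of-instances formulation above maps each slide to a set of ROI feature vectors plus a set of slide-level labels. The sketch below builds that representation and trains a deliberately simplified stand-in (mean-pooled bag features with one-vs-rest logistic regression) rather than any of the four multi-instance multi-label algorithms the paper evaluates; all data, dimensions, and label counts are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

def bags_to_features(bags):
    """Collapse each bag (slide) of ROI instance features into one vector by mean pooling."""
    return np.vstack([np.mean(instances, axis=0) for instances in bags])

# bags: one (n_i, D) array of ROI features per slide; Y: (num_slides, num_labels)
# 0/1 indicator matrix built from the diagnoses on the pathology forms.
rng = np.random.default_rng(0)
bags = [rng.normal(size=(rng.integers(3, 12), 64)) for _ in range(40)]
Y = (rng.random((40, 5)) < 0.3).astype(int)

X = bags_to_features(bags)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
slide_scores = clf.predict_proba(X)  # per-slide, per-label prediction scores
```

Real MIML algorithms keep the instance structure inside the learner instead of pooling it away, which is what allows the ROI-level (instance-level) predictions reported in the paper.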
Item Open Access: Multi-instance multi-label learning for whole slide breast histopathology (International Society for Optical Engineering SPIE, 2016-02-03)
Authors: Mercan, Caner; Mercan, E.; Aksoy, Selim; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.
Digitization of full biopsy slides using the whole slide imaging technology has provided new opportunities for understanding the diagnostic process of pathologists and developing more accurate computer aided diagnosis systems. However, the whole slide images also provide two new challenges to image analysis algorithms. The first one is the need for simultaneous localization and classification of malignant areas in these large images, as different parts of the image may have different levels of diagnostic relevance. The second challenge is the uncertainty regarding the correspondence between the particular image areas and the diagnostic labels typically provided by the pathologists at the slide level. In this paper, we exploit a data set that consists of recorded actions of pathologists while they were interpreting whole slide images of breast biopsies to find solutions to these challenges. First, we extract candidate regions of interest (ROI) from the logs of pathologists' image screenings based on different actions corresponding to zoom events, panning motions, and fixations. Then, we model these ROIs using color and texture features. Next, we represent each slide as a bag of instances corresponding to the collection of candidate ROIs and a set of slide-level labels extracted from the forms that the pathologists filled out according to what they saw during their screenings. Finally, we build classifiers using five different multi-instance multi-label learning algorithms, and evaluate their performances under different learning and validation scenarios involving various combinations of data from three expert pathologists. Experiments that compared the slide-level predictions of the classifiers with the reference data showed average precision values up to 62% when the training and validation data came from the same individual pathologist's viewing logs, and an average precision of 64% was obtained when the candidate ROIs and the labels from all pathologists were combined for each slide. © 2016 SPIE.
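Both multi-instance multi-label entries report slide-level average precision. A small sketch of how such a score can be computed from a multi-label indicator matrix and per-label prediction scores with scikit-learn; the "macro" averaging choice and the synthetic data are assumptions, and the papers' exact evaluation protocol may differ.

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Y_true: (num_slides, num_labels) 0/1 reference labels from the pathology forms;
# Y_score: matching per-label prediction scores from any slide-level classifier.
rng = np.random.default_rng(1)
Y_true = (rng.random((40, 5)) < 0.3).astype(int)
Y_score = 0.6 * Y_true + 0.4 * rng.random((40, 5))  # synthetic, label-correlated scores

ap_macro = average_precision_score(Y_true, Y_score, average="macro")
print(f"slide-level average precision (macro): {ap_macro:.3f}")
```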