Browsing by Subject "Saliency detection"
Now showing 1 - 3 of 3
Item (Open Access): Detection and classification of breast cancer in whole slide histopathology images using deep convolutional networks (2016-07)
Geçer, Barış

The most frequent non-skin cancer type is breast cancer, which is also one of the deadliest diseases, and early and accurate diagnosis is critical for recovery. Recent medical image processing research has demonstrated promising results that may contribute to the analysis of biopsy images by enhancing understanding or by revealing possibly unhealthy tissues during diagnosis. However, these studies have focused on well-annotated and well-cropped patches, whereas a fully automated computer-aided diagnosis (CAD) system requires processing of whole slide histopathology images (WSI), which are enormous in size and therefore difficult to process within reasonable computational power and time. Moreover, those whole slide biopsies consist of healthy, benign, and cancerous tissues at various stages; thus, simultaneous detection and classification of diagnostically relevant regions is challenging. We propose a complete CAD system for efficient localization and classification of regions of interest (ROI) in WSI by employing state-of-the-art deep learning techniques. The system is designed to resemble the organized workflow of expert pathologists by progressively zooming into details, and it consists of two separate sequential steps: (1) detection of ROIs in WSI, and (2) classification of the detected ROIs into five diagnostic classes. The novel saliency detection approach mimics the efficient search patterns of experts at multiple resolutions by training four separate deep networks with samples extracted from the tracking records of pathologists' viewing of WSIs.
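The coarse-to-fine search described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: `toy_saliency_score` (mean intensity) stands in for the four trained deep networks, and the downsampling, patch size, and threshold are arbitrary choices for the sketch.

```python
import numpy as np

def toy_saliency_score(patch):
    # Stand-in for a trained deep network: mean intensity as the "saliency".
    return float(patch.mean())

def progressive_saliency(image, patch=16, levels=3, threshold=0.5):
    """Coarse-to-fine saliency: score the slide at progressively finer
    resolutions, re-examining only regions flagged salient one level up
    (mimicking a pathologist zooming into suspicious areas)."""
    h, w = image.shape
    gh, gw = h // patch, w // patch
    keep = np.ones((gh, gw), dtype=bool)      # candidate grid cells
    for level in reversed(range(levels)):     # coarse -> fine
        factor = 2 ** level                   # downsampling factor
        coarse = image[::factor, ::factor]    # crude strided downsample
        cp = max(patch // factor, 1)          # cell size at this level
        for i in range(gh):
            for j in range(gw):
                if keep[i, j]:                # only re-examine survivors
                    block = coarse[i * cp:(i + 1) * cp, j * cp:(j + 1) * cp]
                    keep[i, j] = block.size > 0 and \
                        toy_saliency_score(block) > threshold
    return keep
```

Only cells that pass the threshold at every resolution survive, so most of the slide is discarded cheaply at the coarse levels.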
The detected relevant regions are fed to the classification step, which includes a deeper network that produces probability maps for the classes, followed by a post-processing step for the final diagnosis. In experiments with 240 WSI, the proposed saliency detection approach outperforms a state-of-the-art method in terms of both efficiency and effectiveness, and the final classification of our complete system obtains slightly lower accuracy than the mean of 45 pathologists' performance. According to McNemar's statistical tests, we cannot reject that the accuracies of 32 out of 45 pathologists are not different from those of the proposed system. Finally, we also provide visualizations of our deep model with several advanced techniques for a better understanding of the learned features and the overall information captured by the network.

Item (Open Access): Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks (Elsevier, 2018)
Geçer, Barış; Aksoy, Selim; Mercan, E.; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.

Generalizability of algorithms for binary cancer vs. no-cancer classification is unknown for clinically more significant multi-class scenarios where intermediate categories have different risk factors and treatment strategies. We present a system that classifies whole slide images (WSI) of breast biopsies into five diagnostic categories. First, a saliency detector that uses a pipeline of four fully convolutional networks, trained with samples from records of pathologists' screenings, performs multi-scale localization of diagnostically relevant regions of interest in WSI. Then, a convolutional network, trained on consensus-derived reference samples, classifies image patches as non-proliferative or proliferative changes, atypical ductal hyperplasia, ductal carcinoma in situ, or invasive carcinoma. Finally, the saliency and classification maps are fused for pixel-wise labeling and slide-level categorization.
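A fusion of the two maps along these lines can be sketched as below. This is a plausible rule for illustration, not the paper's exact fusion: the saliency threshold, the background label 0, and the "most severe class present" slide-level rule are all assumptions of the sketch.

```python
import numpy as np

def fuse_maps(saliency, class_probs, saliency_threshold=0.5):
    """Fuse a saliency map (H, W) with per-class probability maps
    (C, H, W): pixels below the saliency threshold become background
    (label 0), the rest take the argmax class (1..C), and the slide-level
    label is taken as the most severe (highest-index) class present."""
    labels = np.argmax(class_probs, axis=0) + 1        # classes 1..C
    labels[saliency < saliency_threshold] = 0          # mask non-salient pixels
    salient = labels[labels > 0]
    slide_label = int(salient.max()) if salient.size else 0
    return labels, slide_label
```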
Experiments using 240 WSI showed that both the saliency detector and the classifier networks performed better than competing algorithms, and the five-class slide-level accuracy of 55% was not statistically different from the predictions of 45 pathologists. We also present example visualizations of the learned representations for breast cancer diagnosis.

Item (Open Access): Mean-shift analysis for image and video applications (2005)
Cüce, Halil İbrahim

In this thesis, image and video analysis algorithms are developed. Tracking moving objects in video has important applications ranging from CCTV (closed-circuit television) systems to infrared cameras. In current CCTV systems, 80% of the time it is impossible to recognize suspects from the recorded scenes. Therefore, it is very important to get a close shot of a person so that his or her face is recognizable. To take high-resolution pictures of moving objects, a pan-tilt-zoom camera should automatically follow moving objects and record them. In this thesis, a mean-shift based moving object tracking algorithm is developed. In the ordinary mean-shift tracking algorithm, a color histogram or a probability density function (pdf) estimated from image pixels is used to represent the moving object. In our case, a joint probability density function is used to represent the object. The joint-pdf is estimated from the object pixels and their wavelet transform coefficients. In this way, relations between neighboring pixels and the edge and texture information of the moving object are also represented, because the wavelet coefficients are obtained after high-pass filtering. For this reason, the new tracking algorithm is more robust than ordinary mean-shift tracking using only color information. A new content-based image retrieval (CBIR) system is also developed in this thesis. The CBIR system is based on mean-shift analysis using a joint-pdf.
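The joint-pdf idea can be illustrated with a simple 2-D histogram over pixel intensities and wavelet detail coefficients. This is a crude sketch under assumptions of the sketch, not the thesis's estimator: a one-level horizontal Haar detail filter stands in for the wavelet transform, and the bin count and value range are arbitrary.

```python
import numpy as np

def haar_highpass(image):
    # Horizontal Haar detail coefficients: differences of adjacent columns.
    return image[:, 1::2].astype(float) - image[:, ::2].astype(float)

def joint_pdf(image, bins=8):
    """Joint histogram over (pixel intensity, |wavelet detail|); after
    normalization this gives a pdf that carries both color and
    edge/texture information, unlike a plain color histogram."""
    detail = np.abs(haar_highpass(image))
    inten = image[:, ::2].astype(float)      # pixels aligned with the details
    hist, _, _ = np.histogram2d(inten.ravel(), detail.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    return hist / hist.sum()
```

Two regions with the same color distribution but different texture then map to different joint-pdfs, which is the property the tracker and retrieval system exploit.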
In this system, the user selects a window in an image, or an entire image, and queries similar images stored in a database. The selected region is represented using a joint-pdf estimated from the image pixels and their wavelet transform coefficients. The retrieval algorithm is more reliable than other CBIR systems that use only color information or only edge or texture information, because the joint-pdf based approach represents texture, edge, and color information together. The proposed method is also computationally efficient compared to sliding-window based retrieval systems, because the joint-pdfs are compared over non-overlapping windows. Whenever there is a reasonable amount of match between the queried window and the original image window, a mean-shift analysis is started.
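The mean-shift analysis started after a window match can be sketched as the generic mean-shift iteration below: the window repeatedly moves to the weighted centroid of a per-pixel similarity map (e.g., a backprojection of the joint-pdf) until it settles on a local mode. The window size, iteration cap, and weight map are assumptions of the sketch, not the thesis's parameters.

```python
import numpy as np

def mean_shift(weights, start, win=5, max_iter=20):
    """Shift a (win x win) window over a non-negative per-pixel weight
    map until it converges to a local mode; `start` is (row, col)."""
    y, x = start
    h, w = weights.shape
    r = win // 2
    for _ in range(max_iter):
        y0, y1 = max(y - r, 0), min(y + r + 1, h)
        x0, x1 = max(x - r, 0), min(x + r + 1, w)
        window = weights[y0:y1, x0:x1]
        total = window.sum()
        if total == 0:                        # no support: stop
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]       # pixel coordinates in window
        ny = int(round((ys * window).sum() / total))
        nx = int(round((xs * window).sum() / total))
        if (ny, nx) == (y, x):                # converged to the mode
            break
        y, x = ny, nx
    return y, x
```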