Multi-instance multi-label learning for whole slide breast histopathology

dc.citation.epage: 979108-11
dc.citation.spage: 979108-1
dc.contributor.author: Mercan, Caner
dc.contributor.author: Mercan, E.
dc.contributor.author: Aksoy, Selim
dc.contributor.author: Shapiro, L. G.
dc.contributor.author: Weaver, D. L.
dc.contributor.author: Elmore, J. G.
dc.coverage.spatial: San Diego, California, United States
dc.date.accessioned: 2018-04-12T11:46:50Z
dc.date.available: 2018-04-12T11:46:50Z
dc.date.issued: 2016-02-03
dc.department: Department of Computer Engineering
dc.description: Date of Conference: 27 February – 3 March 2016
dc.description: Conference name: SPIE Medical Imaging, 2016
dc.description.abstract: Digitization of full biopsy slides using whole slide imaging technology has provided new opportunities for understanding the diagnostic process of pathologists and for developing more accurate computer aided diagnosis systems. However, whole slide images also pose two new challenges for image analysis algorithms. The first is the need for simultaneous localization and classification of malignant areas in these large images, as different parts of the image may have different levels of diagnostic relevance. The second is the uncertainty regarding the correspondence between particular image areas and the diagnostic labels that pathologists typically provide at the slide level. In this paper, we exploit a data set consisting of the recorded actions of pathologists as they interpreted whole slide images of breast biopsies to find solutions to these challenges. First, we extract candidate regions of interest (ROI) from the logs of the pathologists' image screenings based on different actions corresponding to zoom events, panning motions, and fixations. Then, we model these ROIs using color and texture features. Next, we represent each slide as a bag of instances corresponding to the collection of candidate ROIs, together with a set of slide-level labels extracted from the forms that the pathologists filled out according to what they saw during their screenings. Finally, we build classifiers using five different multi-instance multi-label learning algorithms and evaluate their performance under different learning and validation scenarios involving various combinations of data from three expert pathologists. Experiments that compared the slide-level predictions of the classifiers with the reference data showed average precision values of up to 62% when the training and validation data came from the same individual pathologist's viewing logs, and an average precision of 64% when the candidate ROIs and the labels from all pathologists were combined for each slide. © 2016 SPIE.
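The abstract's pipeline hinges on the multi-instance multi-label (MIML) representation: each slide is a bag of ROI feature vectors, and labels are attached only at the bag level. The following is a minimal sketch of that setup, not one of the five MIML algorithms evaluated in the paper; the feature values, labels, and the nearest-centroid scoring rule below are illustrative placeholders.

```python
# Toy MIML setup: each slide is a "bag" of ROI feature vectors (standing in
# for color/texture descriptors) paired with a SET of slide-level labels.
import math
from collections import defaultdict

train_bags = [
    ([[0.9, 0.1], [0.8, 0.2]], {"invasive"}),
    ([[0.1, 0.9], [0.2, 0.8]], {"benign"}),
    ([[0.85, 0.15], [0.15, 0.85]], {"invasive", "benign"}),
]

def train_centroids(bags):
    """For each label, average the instances of every bag carrying that label."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for instances, labels in bags:
        for label in labels:
            for x in instances:
                if sums[label] is None:
                    sums[label] = [0.0] * len(x)
                sums[label] = [s + v for s, v in zip(sums[label], x)]
                counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def predict(bag, centroids, radius=0.5):
    """Predict a label if ANY instance lies within `radius` of its centroid --
    the usual MIML assumption that one positive instance explains a bag label."""
    preds = set()
    for label, c in centroids.items():
        if any(math.dist(x, c) <= radius for x in bag):
            preds.add(label)
    return preds

centroids = train_centroids(train_bags)
print(predict([[0.88, 0.12], [0.12, 0.88]], centroids))  # both label regions hit
```

Because labels live only on bags, a slide containing both a malignant-looking and a benign-looking ROI can correctly receive both labels, which mirrors the uncertainty about which image area justifies which slide-level diagnosis.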
dc.identifier.doi: 10.1117/12.2216458
dc.identifier.uri: http://hdl.handle.net/11693/37652
dc.language.iso: English
dc.publisher: International Society for Optical Engineering (SPIE)
dc.relation.isversionof: http://dx.doi.org/10.1117/12.2216458
dc.source.title: Medical Imaging 2016: Digital Pathology
dc.subject: Breast histopathology
dc.subject: Digital pathology
dc.subject: Image classification
dc.subject: Multi-instance multi-label learning
dc.subject: Region of interest analysis
dc.subject: Whole slide imaging
dc.subject: Computer aided analysis
dc.subject: Computer aided diagnosis
dc.title: Multi-instance multi-label learning for whole slide breast histopathology
dc.type: Conference Paper
Files
Original bundle
Name: Multi_instance_multi_label_learning_for_whole_slide_breast_histopathology.pdf
Size: 1.71 MB
Format: Adobe Portable Document Format