Browsing by Subject "Convolutional Neural Networks"
Now showing 1 - 3 of 3
Item Open Access
Deep learning based cell segmentation in histopathological images (2018-08) Doğan, Deniz

In digital pathology, cell imaging systems allow us to comprehend histopathological events at the cellular level. The first step in these systems is generally cell segmentation, which substantially affects the subsequent steps of an effective and reliable analysis of histopathological images. On the other hand, cell segmentation is a challenging task in histopathological images, where cells have different pixel intensities and morphological characteristics. Approaches that integrate both the pixel intensities and the morphological characteristics of cells are likely to achieve successful segmentation results. This thesis proposes a deep learning based approach for the reliable segmentation of cells in images of histopathological tissue samples stained with the routinely used hematoxylin and eosin technique. The approach introduces two-stage convolutional neural networks that employ pixel intensities in the first stage and morphological cell features in the second stage. The proposed TwoStageCNN method extracts cell features related to cell morphology from the class labels and posteriors generated in the first stage, and uses these morphological cell features in the second stage for the final segmentation. We evaluate the proposed approach on 3428 cells, and the experimental results show that it yields better segmentation results than different segmentation techniques.

Item Open Access
Fast and efficient model parallelism for deep convolutional neural networks (2019-08) Eserol, Burak

Convolutional Neural Networks (CNNs) have become very popular and successful in recent years. Increasing the depth and number of parameters of CNNs has been crucial to this success.
However, it is hard to fit deep convolutional neural networks into a single machine's memory, and training them takes a very long time. There are two parallelism methods to solve this problem: data parallelism and model parallelism. In data parallelism, the neural network model is replicated across different machines and the data is partitioned among them. Each replica trains on its data and communicates parameters and their gradients with the other replicas. This process results in a huge communication volume, which slows down the training and convergence of the deep neural network. In model parallelism, a deep neural network model is partitioned among different machines and trained in a pipelined manner. However, it requires a human expert to partition the network, and it is hard to obtain a low communication volume as well as a low computational load balance ratio using known partitioning methods. In this thesis, a new model parallelism method called hypergraph partitioned model parallelism is proposed. It does not require a human expert to partition the network, and it obtains a better computational load balance ratio along with a lower communication volume than existing model parallelism techniques. Besides, the proposed method also reduces the communication volume overhead of data parallelism by 93%. Finally, it is also shown that distributing a deep neural network using the proposed hypergraph partitioned model parallelism, rather than the existing parallelism methods, causes the network to converge to the target accuracy faster.

Item Open Access
Improved artificial neural network training with advanced methods (2018-09) Çatalbaş, Burak

Artificial Neural Networks (ANNs) are used for different machine learning tasks such as classification and clustering. They have been utilized in important tasks and are offering more and more new services in our daily lives.
Learning capabilities of these networks have accelerated significantly since the 2000s with the increasing computational power and amount of data. Consequently, research on these networks has been renamed Deep Learning, which emerged as a major research area, not only within neural networks but also in the Machine Learning discipline. For such an important research field, the techniques used in training these networks can be seen as the keys to more successful results. In this work, each part of the training procedure is investigated using different and improved, sometimes new, techniques on convolutional neural networks that classify grayscale and colored image datasets. The advanced methods include ones from the literature, such as He-truncated Gaussian initialization. In addition, our contributions to the literature include the SinAdaMax optimizer, the Dominantly Exponential Linear Unit (DELU), He-truncated Laplacian initialization, and a pyramid approach for max-pool layers. In the chapters of this thesis, success rates are increased by adding these advanced methods cumulatively, especially DELU and SinAdaMax, which are our contributions as upgraded methods. As a result, success rate thresholds for different datasets are met with simple convolutional neural networks, improved with these advanced methods and reaching promising increases in test success, within 15 to 21 hours (typically less than a day). Thus, it is shown that better performances are obtained with these different and improved techniques on well-known classification datasets.
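The last abstract mentions He-truncated Gaussian initialization from the literature. As a rough illustration only (the thesis's exact variant is not given in the abstract), the following sketch combines the standard He scale for ReLU-like units with a resampling-based two-standard-deviation truncation; the function name, cutoff, and layer sizes are assumptions for the example.

```python
import numpy as np

def he_truncated_gaussian(fan_in, fan_out, trunc=2.0, rng=None):
    """Sketch of a He-style initializer drawn from a truncated Gaussian.

    Draws weights from N(0, 2/fan_in) and resamples any draw whose
    magnitude exceeds `trunc` standard deviations. The cutoff of 2.0
    is a common convention, not necessarily the one used in the thesis.
    """
    rng = np.random.default_rng() if rng is None else rng
    std = np.sqrt(2.0 / fan_in)  # He scale for ReLU-family activations
    w = rng.normal(0.0, std, size=(fan_in, fan_out))
    mask = np.abs(w) > trunc * std
    while mask.any():  # resample out-of-range draws until all are in range
        w[mask] = rng.normal(0.0, std, size=int(mask.sum()))
        mask = np.abs(w) > trunc * std
    return w

w = he_truncated_gaussian(256, 128, rng=np.random.default_rng(0))
print(w.shape)  # (256, 128)
```

Truncation keeps all initial weights within a bounded range while preserving the He variance scale approximately, which avoids the occasional large initial weight a plain Gaussian can produce.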