Novel deep learning algorithms for multi-modal medical image synthesis

buir.advisor: Çukur, Tolga
dc.contributor.author: Dalmaz, Onat
dc.date.accessioned: 2023-08-04T10:31:10Z
dc.date.available: 2023-08-04T10:31:10Z
dc.date.copyright: 2023-08
dc.date.issued: 2023-08
dc.date.submitted: 2023-08-02
dc.department: Department of Electrical and Electronics Engineering
dc.description: Cataloged from PDF version of article.
dc.description: Thesis (Master's): İhsan Doğramacı Bilkent University, Department of Electrical and Electronics Engineering, 2023.
dc.description: Includes bibliographical references (leaves 91-116).
dc.description.abstract: Multi-modal medical imaging is a powerful tool for the diagnosis and treatment of various diseases, as it provides complementary information about tissue morphology and function. However, acquiring multiple images from different modalities or contrasts is often impractical or impossible due to factors such as scan time, cost, and patient comfort. Medical image translation has emerged as a promising solution to synthesize target-modality images given source-modality images. The ability to synthesize unavailable images enhances the ubiquity and utility of multi-modal protocols while decreasing examination costs and exposure to toxicity sources such as ionizing radiation and contrast agents. Existing medical image translation methods prominently rely on generative adversarial networks (GANs) with convolutional neural network (CNN) backbones. CNNs are designed to perform local processing with compact filters, and this inductive bias limits contextual sensitivity. Meanwhile, GANs suffer from limited sample fidelity and diversity due to one-shot sampling and implicit characterization of the image distribution. To overcome the challenges of CNN-based GAN models, this thesis first introduces ResViT, which leverages novel aggregated residual transformer (ART) blocks that synergistically fuse representations from convolutional and transformer modules. It then introduces SynDiff, a conditional diffusion model that progressively maps noise and source images onto the target image via large diffusion steps and adversarial projections, capturing a direct correlate of the image distribution and improving sample quality and speed. ResViT provides a unified implementation that avoids the need to rebuild separate synthesis models for varying source-target modality configurations, whereas SynDiff enables unsupervised training on unpaired datasets via a cycle-consistent architecture. ResViT and SynDiff were demonstrated on synthesizing missing sequences in multi-contrast MRI and on synthesizing CT images from MRI, and their state-of-the-art performance in medical image translation was shown.
dc.description.degree: M.S.
dc.description.statementofresponsibility: by Onat Dalmaz
dc.format.extent: xxii, 116 leaves : illustrations, charts ; 30 cm.
dc.identifier.itemid: B162289
dc.identifier.uri: https://hdl.handle.net/11693/112585
dc.language.iso: English
dc.publisher: Bilkent University
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Multi-modal
dc.subject: Medical image synthesis
dc.subject: Deep learning
dc.subject: Transformer
dc.subject: Diffusion models
dc.title: Novel deep learning algorithms for multi-modal medical image synthesis
dc.title.alternative: Çok-kipli tıbbi görüntü sentezi için yeni derin öğrenme algoritmaları
dc.type: Thesis
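The ART blocks summarized in the abstract fuse locally processed convolutional features with globally attended transformer features. The following is a minimal toy sketch of that fusion idea in NumPy; the function names, the single-channel setup, and the fusion weight `alpha` are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def conv2d_local(x, k):
    """Naive 'same' 2D convolution: local processing with a compact filter."""
    H, W = x.shape
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero padding keeps output size
    out = np.zeros_like(x, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def self_attention_global(x):
    """Treat each pixel as a token and let it attend to all other pixels."""
    H, W = x.shape
    tokens = x.reshape(-1, 1)                    # (HW, 1) one feature per token
    scores = tokens @ tokens.T                   # (HW, HW) pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return (attn @ tokens).reshape(H, W)

def art_block_fusion(x, kernel, alpha=0.5):
    """Residual fusion of a local (convolutional) and a global (attention) branch."""
    local_feat = conv2d_local(x, kernel)
    global_feat = self_attention_global(x)
    return x + alpha * local_feat + (1.0 - alpha) * global_feat
```

In the actual ResViT architecture the two branches are learned multi-channel modules combined inside residual blocks; this sketch only shows the structural idea of adding a locally filtered and a globally attended version of the input back onto a residual path.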

Files

Original bundle
Name: B162289.pdf
Size: 1.67 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed upon to submission