Deep learning for multi-contrast MRI synthesis

buir.advisor: Çukur, Tolga
dc.contributor.author: Yurt, Mahmut
dc.date.accessioned: 2021-08-17T10:51:36Z
dc.date.available: 2021-08-17T10:51:36Z
dc.date.copyright: 2021-07
dc.date.issued: 2021-07
dc.date.submitted: 2021-08-06
dc.description: Cataloged from PDF version of article.
dc.description: Thesis (Master's): İhsan Doğramacı Bilkent University, Department of Electrical and Electronics Engineering, 2021.
dc.description: Includes bibliographical references (leaves 81-97).
dc.description.abstract: Magnetic resonance imaging (MRI) possesses the unique versatility to acquire images under a diverse array of distinct tissue contrasts. Multi-contrast images, in turn, better delineate tissues, accumulate diagnostic information, and enhance radiological analyses. Yet, the prolonged, costly exams native to multi-contrast protocols often impair this diversity, resulting in missing images from some contrasts. A promising remedy for this limitation is image synthesis, which recovers missing target-contrast images from available source-contrast images. Learning-based models have demonstrated remarkable success in this source-to-target mapping due to their prowess in solving even the most demanding inverse problems. Mainstream approaches proposed for synthetic MRI typically train a model to perform either one-to-one or many-to-one mapping. One-to-one models manifest elevated sensitivity to detailed features of the given source, but they perform suboptimally when source and target images are poorly linked. Meanwhile, many-to-one counterparts pool information from multiple sources, yet this comes at the expense of losing detailed features uniquely present in certain sources. Furthermore, regardless of the mapping, both innately demand large training sets of high-quality source and target images Fourier-reconstructed from Nyquist-sampled acquisitions; however, time and cost considerations pose significant challenges in compiling such datasets. To address these limitations, here we first propose a novel multi-stream model that task-adaptively fuses unique and shared image features from a hybrid of multiple one-to-one streams and a single many-to-one stream. We then introduce a novel semi-supervised learning framework based on selective tensor loss functions to learn high-quality image synthesis directly from a training dataset of undersampled acquisitions, bypassing the undesirable data requirements of deep learning. Demonstrations on brain MRI images from healthy subjects and glioma patients indicate the superiority of the proposed approaches over state-of-the-art baselines.
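The hybrid architecture described in the abstract — several one-to-one streams that preserve source-specific detail, plus one many-to-one stream that pools shared information, fused adaptively into a target-contrast estimate — can be sketched at a very high level. This is a minimal illustrative toy in NumPy, not the thesis's actual network: the stream functions, the softmax gate, and all shapes are hypothetical stand-ins for the learned convolutional components in the real model.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_to_one_stream(source, weight):
    # Hypothetical per-source stream: captures features unique to one contrast.
    return np.tanh(source @ weight)

def many_to_one_stream(sources, weight):
    # Hypothetical shared stream: pools information across all source contrasts.
    pooled = np.mean(sources, axis=0)
    return np.tanh(pooled @ weight)

def fuse(features, gate):
    # Task-adaptive fusion stand-in: a gate softmax-weights the unique
    # (one-to-one) and shared (many-to-one) feature maps before combining.
    stacked = np.stack(features)                  # (n_streams, H, W)
    weights = np.exp(gate) / np.exp(gate).sum()   # softmax over streams
    return np.tensordot(weights, stacked, axes=1) # weighted sum -> (H, W)

H, W = 8, 8
sources = rng.standard_normal((2, H, W))          # e.g. T1- and T2-weighted inputs
w1, w2, w_shared = (rng.standard_normal((W, W)) for _ in range(3))

feats = [one_to_one_stream(sources[0], w1),
         one_to_one_stream(sources[1], w2),
         many_to_one_stream(sources, w_shared)]
target = fuse(feats, gate=np.zeros(3))            # synthesized target-contrast map
print(target.shape)  # (8, 8)
```

In the thesis the streams are deep generative networks and the gate is learned end to end; here the zero gate simply averages the three feature maps to show the dataflow.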
dc.description.provenance: Submitted by Betül Özen (ozen@bilkent.edu.tr) on 2021-08-17T10:51:36Z. No. of bitstreams: 1. 10411492.pdf: 14855394 bytes, checksum: dc022799ca1c2d1ffae309bfe95d495d (MD5)
dc.description.statementofresponsibility: by Mahmut Yurt
dc.format.extent: xxiii, 97 leaves : illustrations (some color) ; 30 cm.
dc.identifier.itemid: B138667
dc.identifier.uri: http://hdl.handle.net/11693/76446
dc.language.iso: English
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: MRI synthesis
dc.subject: Deep learning
dc.subject: Multi-stream
dc.subject: Semi-supervised
dc.title: Deep learning for multi-contrast MRI synthesis
dc.title.alternative: Çoklu kontrast MRG için derin öğrenme
dc.type: Thesis
thesis.degree.discipline: Electrical and Electronics Engineering
thesis.degree.grantor: Bilkent University
thesis.degree.level: Master's
thesis.degree.name: MS (Master of Science)

Files

Original bundle

Name: 10411492.pdf
Size: 14.17 MB
Format: Adobe Portable Document Format
Description: Full printable version

License bundle

Name: license.txt
Size: 1.69 KB
Format: Item-specific license agreed upon to submission