Deep MRI reconstruction with generative vision transformer

buir.contributor.author: Korkmaz, Yılmaz
buir.contributor.author: Yurt, Mahmut
buir.contributor.author: Dar, Salman Ul Hassan
buir.contributor.author: Özbey, Muzaffer
buir.contributor.author: Çukur, Tolga
buir.contributor.orcid: Çukur, Tolga | 0000-0002-2296-851X
dc.citation.epage: 64
dc.citation.spage: 55
dc.contributor.author: Korkmaz, Yılmaz
dc.contributor.author: Yurt, Mahmut
dc.contributor.author: Dar, Salman Ul Hassan
dc.contributor.author: Özbey, Muzaffer
dc.contributor.author: Çukur, Tolga
dc.coverage.spatial: Strasbourg, France
dc.date.accessioned: 2022-01-27T08:08:24Z
dc.date.available: 2022-01-27T08:08:24Z
dc.date.issued: 2021
dc.department: Department of Electrical and Electronics Engineering
dc.department: National Magnetic Resonance Research Center (UMRAM)
dc.description: Conference Name: International Workshop on Machine Learning for Medical Image Reconstruction, MLMIR 2021
dc.description: Date of Conference: 25 September 2021
dc.description.abstract: Supervised training of deep network models for MRI reconstruction requires access to large databases of fully-sampled MRI acquisitions. To alleviate dependency on such costly databases, unsupervised learning strategies have received interest. A powerful framework that eliminates the need for training data altogether is the deep image prior (DIP), which inverts randomly-initialized models to infer the network parameters most consistent with the undersampled test data. However, existing DIP methods rely on convolutional backbones, which suffer from limited sensitivity to long-range spatial dependencies and thereby poor model invertibility. To address these limitations, here we propose an unsupervised MRI reconstruction method based on a novel generative vision transformer (GVTrans). GVTrans progressively maps low-dimensional noise and latent variables onto MR images via cascaded blocks of cross-attention vision transformers. The cross-attention mechanism between latents and image features serves to enhance representational learning of local and global context. Meanwhile, latent and noise injections at each network layer permit fine control of generated image features, improving model invertibility. Demonstrations are performed for scan-specific reconstruction of brain MRI data at multiple contrasts and acceleration factors. GVTrans yields superior performance to state-of-the-art generative models based on convolutional neural networks (CNNs).
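
The abstract describes a DIP-style, scan-specific inference procedure: a randomly-initialized generator is inverted on the undersampled test scan by optimizing its latents and weights for consistency with the acquired k-space samples. The sketch below illustrates only that optimization loop and is not the authors' implementation; ToyGenerator, data_consistency_loss, and reconstruct are hypothetical names, a single-coil Cartesian acquisition is assumed, and the placeholder generator merely stands in for the actual GVTrans architecture (cascaded cross-attention transformer blocks with latent and noise injection).

# Minimal sketch of DIP-style scan-specific MRI reconstruction (assumptions:
# single-coil Cartesian sampling, placeholder generator instead of GVTrans).
import torch

def data_consistency_loss(image, kspace, mask):
    # L1 distance between the generated image's k-space and the acquired samples.
    pred_k = torch.fft.fft2(image, norm="ortho")
    return (mask * (pred_k - kspace)).abs().mean()

class ToyGenerator(torch.nn.Module):
    # Hypothetical stand-in for GVTrans: maps a low-dimensional latent vector
    # to a complex-valued image via a single linear layer.
    def __init__(self, latent_dim=128, size=256):
        super().__init__()
        self.size = size
        self.fc = torch.nn.Linear(latent_dim, 2 * size * size)

    def forward(self, z):
        out = self.fc(z).view(2, self.size, self.size)  # real and imaginary parts
        return torch.complex(out[0], out[1])

def reconstruct(kspace, mask, steps=1000, lr=1e-3, latent_dim=128):
    # Invert a randomly-initialized generator on the undersampled test scan by
    # jointly optimizing the latent vector and the generator weights.
    gen = ToyGenerator(latent_dim, kspace.shape[-1])
    z = torch.randn(latent_dim, requires_grad=True)
    opt = torch.optim.Adam(list(gen.parameters()) + [z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = data_consistency_loss(gen(z), kspace, mask)
        loss.backward()
        opt.step()
    return gen(z).detach()

# Example usage with synthetic 256x256 data and a random undersampling mask:
# kspace = torch.fft.fft2(torch.randn(256, 256, dtype=torch.complex64), norm="ortho")
# mask = (torch.rand(256, 256) < 0.25).float()
# image = reconstruct(mask * kspace, mask, steps=200)

In practice the generator would be GVTrans itself, multi-coil data would require coil sensitivity maps in the forward operator, and additional regularization of the latents may be needed.
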
dc.identifier.doi: 10.1007/978-3-030-88552-6_6
dc.identifier.eisbn: 978-3-030-88552-6
dc.identifier.eissn: 1611-3349
dc.identifier.isbn: 978-3-030-88551-9
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/11693/76822
dc.language.iso: English
dc.publisher: Springer
dc.relation.ispartofseries: Lecture Notes in Computer Science (LNCS)
dc.relation.isversionof: https://doi.org/10.1007/978-3-030-88552-6_6
dc.source.title: Lecture Notes in Computer Science
dc.subject: MRI reconstruction
dc.subject: Transformer
dc.subject: Generative
dc.subject: Attention
dc.subject: Unsupervised
dc.title: Deep MRI reconstruction with generative vision transformer
dc.type: Conference Paper

Files

Original bundle
- Deep_MRI_Reconstruction_with_Generative_Vision_Transformers.pdf (3.12 MB, Adobe Portable Document Format)

License bundle
- license.txt (1.69 KB, item-specific license agreed upon at submission)