Deep MRI reconstruction with generative vision transformers
buir.contributor.author | Korkmaz, Yılmaz | |
buir.contributor.author | Yurt, Mahmut | |
buir.contributor.author | Dar, Salman Ul Hassan | |
buir.contributor.author | Özbey, Muzaffer | |
buir.contributor.author | Çukur, Tolga | |
buir.contributor.orcid | Çukur, Tolga|0000-0002-2296-851X | |
dc.citation.epage | 64 | en_US |
dc.citation.spage | 55 | en_US |
dc.contributor.author | Korkmaz, Yılmaz | |
dc.contributor.author | Yurt, Mahmut | |
dc.contributor.author | Dar, Salman Ul Hassan | |
dc.contributor.author | Özbey, Muzaffer | |
dc.contributor.author | Çukur, Tolga | |
dc.coverage.spatial | Strasbourg, France | en_US |
dc.date.accessioned | 2022-01-27T08:08:24Z | |
dc.date.available | 2022-01-27T08:08:24Z | |
dc.date.issued | 2021 | |
dc.department | Department of Electrical and Electronics Engineering | en_US |
dc.department | National Magnetic Resonance Research Center (UMRAM) | en_US |
dc.description | Conference Name: International Workshop on Machine Learning for Medical Image Reconstruction, MLMIR 2021 | en_US |
dc.description | Date of Conference: 25 September 2021 | en_US |
dc.description.abstract | Supervised training of deep network models for MRI reconstruction requires access to large databases of fully-sampled MRI acquisitions. To alleviate dependency on costly databases, unsupervised learning strategies have received interest. A powerful framework that eliminates the need for training data altogether is the deep image prior (DIP). To this end, DIP inverts a randomly-initialized model to infer the network parameters most consistent with the undersampled test data. However, existing DIP methods leverage convolutional backbones that suffer from limited sensitivity to long-range spatial dependencies and thereby poor model invertibility. To address these limitations, here we propose an unsupervised MRI reconstruction method based on a novel generative vision transformer (GVTrans). GVTrans progressively maps low-dimensional noise and latent variables onto MR images via cascaded blocks of cross-attention vision transformers. The cross-attention mechanism between latents and image features serves to enhance representational learning of local and global context. Meanwhile, latent and noise injections at each network layer permit fine control of generated image features, improving model invertibility. Demonstrations are performed for scan-specific reconstruction of brain MRI data at multiple contrasts and acceleration factors. GVTrans yields superior performance to state-of-the-art generative models based on convolutional neural networks (CNNs). | en_US |
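As a rough illustration of the DIP-style inversion described in the abstract, the PyTorch sketch below fits a randomly-initialized generator to the undersampled k-space of a single test scan. The `ToyGenerator`, the `fft2c` encoding operator, the latent size, and the optimizer settings are all illustrative assumptions; this is not the authors' GVTrans implementation, whose generator instead uses cascaded cross-attention transformer blocks.

```python
# Illustrative sketch only: a DIP-style, scan-specific reconstruction loop.
# ToyGenerator, fft2c, and all hyperparameters are hypothetical stand-ins,
# not the GVTrans architecture or the authors' code.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in image prior; GVTrans instead uses cascaded cross-attention
    transformer blocks with latent and noise injections at each layer."""
    def __init__(self, latent_dim=128, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 2 * img_size * img_size),  # real + imaginary parts
        )

    def forward(self, z):
        out = self.net(z).view(2, self.img_size, self.img_size)
        return torch.complex(out[0], out[1])  # complex-valued MR image

def fft2c(x):
    # Centered 2D FFT as a single-coil MRI encoding operator.
    return torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(x)))

def reconstruct(generator, y, mask, n_iters=1000, lr=1e-4):
    """Invert a randomly-initialized generator on one undersampled scan:
    optimize the weights and latent z so the generated image's k-space
    matches the acquired samples (y: undersampled k-space, mask: sampling
    pattern with ones at acquired locations)."""
    z = torch.randn(128, requires_grad=True)
    opt = torch.optim.Adam(list(generator.parameters()) + [z], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        img = generator(z)
        loss = torch.norm(mask * (fft2c(img) - y))  # data-consistency loss
        loss.backward()
        opt.step()
    return generator(z).detach()

# Usage (hypothetical 64x64 single-coil data):
#   recon = reconstruct(ToyGenerator(), y, mask)
```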
dc.identifier.doi | 10.1007/978-3-030-88552-6_6 | en_US |
dc.identifier.eisbn | 978-3-030-88552-6 | |
dc.identifier.eissn | 1611-3349 | en_US |
dc.identifier.isbn | 978-3-030-88551-9 | |
dc.identifier.issn | 0302-9743 | en_US |
dc.identifier.uri | http://hdl.handle.net/11693/76822 | |
dc.language.iso | English | en_US |
dc.publisher | Springer | en_US |
dc.relation.ispartofseries | Lecture Notes in Computer Science (LNCS) | |
dc.relation.isversionof | https://doi.org/10.1007/978-3-030-88552-6_6 | en_US |
dc.source.title | Lecture Notes in Computer Science | en_US |
dc.subject | MRI reconstruction | en_US |
dc.subject | Transformer | en_US |
dc.subject | Generative | en_US |
dc.subject | Attention | en_US |
dc.subject | Unsupervised | en_US |
dc.title | Deep MRI reconstruction with generative vision transformers | en_US |
dc.type | Conference Paper | en_US |
Files
Original bundle
- Name: Deep_MRI_Reconstruction_with_Generative_Vision_Transformers.pdf
- Size: 3.12 MB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 1.69 KB
- Description: Item-specific license agreed upon to submission