MRI reconstruction with conditional adversarial transformers

buir.contributor.author: Korkmaz, Yılmaz
buir.contributor.author: Özbey, Muzaffer
buir.contributor.author: Çukur, Tolga
buir.contributor.orcid: Özbey, Muzaffer | 0000-0002-6262-8915
buir.contributor.orcid: Çukur, Tolga | 0000-0002-2296-851X
dc.citation.epage: 71
dc.citation.spage: 62
dc.citation.volumeNumber: 13587
dc.contributor.author: Korkmaz, Yılmaz
dc.contributor.author: Özbey, Muzaffer
dc.contributor.author: Çukur, Tolga
dc.contributor.editor: Haq, Nandinee
dc.contributor.editor: Johnson, Patricia
dc.contributor.editor: Maier, Andreas
dc.contributor.editor: Qin, Chen
dc.contributor.editor: Würfl, Tobias
dc.contributor.editor: Yoo, Jaejun
dc.coverage.spatial: Singapore
dc.date.accessioned: 2023-02-15T13:58:29Z
dc.date.available: 2023-02-15T13:58:29Z
dc.date.issued: 2022-09-22
dc.department: Department of Electrical and Electronics Engineering
dc.department: National Magnetic Resonance Research Center (UMRAM)
dc.description: Conference Name: 5th International Workshop on Machine Learning for Medical Image Reconstruction, MLMIR 2022
dc.description: Date of Conference: 22 September 2022
dc.description.abstract: Deep learning has been successfully adopted for accelerated MRI reconstruction given its exceptional performance in inverse problems. Deep reconstruction models are commonly based on convolutional neural network (CNN) architectures that use compact, input-invariant filters to capture static local features in data. While this inductive bias allows efficient model training on relatively small datasets, it also limits sensitivity to long-range context and compromises generalization performance. Transformers are a promising alternative that use broad-scale, input-adaptive filtering to improve contextual sensitivity and generalization. Yet, existing transformer architectures incur quadratic complexity and often neglect the physical signal model. Here, we introduce a model-based transformer architecture (MoTran) for high-performance MRI reconstruction. MoTran is an adversarial architecture that unrolls transformer and data-consistency blocks in its generator. Cross-attention transformers are leveraged to maintain linear complexity with respect to the feature map size. Comprehensive experiments on MRI reconstruction tasks show that the proposed model improves image quality over state-of-the-art CNN models.
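To make the abstract's two key ideas concrete, below is a minimal PyTorch sketch of an unrolled generator that alternates cross-attention transformer blocks with data-consistency blocks. The cross-attention uses a small set of learned latent tokens as keys and values, so attention cost grows linearly with the feature-map size, in the spirit of what the abstract describes. All names (CrossAttentionBlock, UnrolledTransformer), the latent-token design, and the single-coil Cartesian sampling model are illustrative assumptions, not the authors' MoTran implementation; the adversarial discriminator and training loop are omitted.

```python
# Hypothetical sketch of an unrolled transformer generator with data
# consistency, assuming single-coil Cartesian MRI. Not the MoTran code.
import torch
import torch.nn as nn
import torch.fft as fft


class CrossAttentionBlock(nn.Module):
    """N image tokens attend to L learned latents: cost O(N*L), linear in N."""

    def __init__(self, dim: int, num_latents: int = 64, heads: int = 4):
        super().__init__()
        self.latents = nn.Parameter(0.02 * torch.randn(num_latents, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # (B, N, dim)
        lat = self.latents.expand(tokens.size(0), -1, -1)     # (B, L, dim)
        # Queries are the N image tokens; keys/values are the L latents.
        out, _ = self.attn(self.norm1(tokens), lat, lat)
        tokens = tokens + out
        return tokens + self.mlp(self.norm2(tokens))


def data_consistency(x, y_kspace, mask):
    """Overwrite sampled k-space entries of the estimate with measurements."""
    k = fft.fft2(x, norm="ortho")
    k = mask * y_kspace + (1.0 - mask) * k
    return fft.ifft2(k, norm="ortho")


class UnrolledTransformer(nn.Module):
    """Unrolled generator: transformer block -> data consistency, repeated."""

    def __init__(self, dim: int = 64, n_iters: int = 4):
        super().__init__()
        self.embed = nn.Linear(2, dim)   # (real, imag) -> token features
        self.proj = nn.Linear(dim, 2)    # token features -> (real, imag)
        self.blocks = nn.ModuleList(
            CrossAttentionBlock(dim) for _ in range(n_iters)
        )

    def forward(self, y_kspace, mask):   # y_kspace: (B, H, W), complex
        x = fft.ifft2(y_kspace, norm="ortho")   # zero-filled initialization
        B, H, W = x.shape
        for blk in self.blocks:
            t = torch.view_as_real(x).reshape(B, H * W, 2)
            t = self.proj(blk(self.embed(t)))
            x = torch.view_as_complex(t.reshape(B, H, W, 2).contiguous())
            x = data_consistency(x, y_kspace, mask)  # enforce signal model
        return x


# Toy usage: randomly undersampled single-coil k-space.
y = torch.randn(1, 128, 128, dtype=torch.complex64)
mask = (torch.rand(1, 128, 128) < 0.25).float()
recon = UnrolledTransformer()(mask * y, mask)
print(recon.shape)  # torch.Size([1, 128, 128])
```

The data-consistency step enforces the physical signal model by replacing the sampled k-space entries of each intermediate estimate with the acquired measurements, a standard choice in unrolled reconstruction networks; the latent-token cross-attention is one common way to obtain linear-complexity attention and is assumed here, not taken from the paper.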
dc.description.provenance: Submitted by Evrim Ergin (eergin@bilkent.edu.tr) on 2023-02-15T13:58:29Z; made available in DSpace on 2023-02-15T13:58:29Z (GMT). Previous issue date: 2022-09-22.
dc.identifier.doi: 10.1007/978-3-031-17247-2_7
dc.identifier.eisbn: 978-3-031-17247-2
dc.identifier.isbn: 978-3-031-17246-5
dc.identifier.uri: http://hdl.handle.net/11693/111377
dc.language.iso: English
dc.publisher: Springer Cham
dc.relation.ispartofseries: Lecture Notes in Computer Science
dc.relation.isversionof: https://doi.org/10.1007/978-3-031-17247-2_7
dc.source.title: Machine Learning for Medical Image Reconstruction
dc.subject: Attention
dc.subject: Generative
dc.subject: MRI Reconstruction
dc.subject: Transformer
dc.title: MRI reconstruction with conditional adversarial transformers
dc.type: Conference Paper

Files

Original bundle
Name: MRI_reconstruction_with_conditional_adversarial_transformers.pdf
Size: 3.95 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.69 KB
Description: Item-specific license agreed upon at submission