MRI reconstruction with conditional adversarial transformers

Date
2022-09-22
Source Title
Machine Learning for Medical Image Reconstruction
Publisher
Springer Cham
Volume
13587
Pages
62–71
Language
English
Type
Conference Paper
Abstract

Deep learning has been successfully adopted for accelerated MRI reconstruction given its exceptional performance in inverse problems. Deep reconstruction models are commonly based on convolutional neural network (CNN) architectures that use compact, input-invariant filters to capture static local features in data. While this inductive bias allows efficient model training on relatively small datasets, it also limits sensitivity to long-range context and compromises generalization performance. Transformers are a promising alternative that use broad-scale, input-adaptive filtering to improve contextual sensitivity and generalization. Yet, existing transformer architectures incur quadratic complexity and often neglect the physical signal model. Here, we introduce a model-based transformer architecture (MoTran) for high-performance MRI reconstruction. MoTran is an adversarial architecture that unrolls transformer and data-consistency blocks in its generator. Cross-attention transformers are leveraged to maintain linear complexity in terms of the feature map size. Comprehensive experiments on MRI reconstruction tasks show that the proposed model improves image quality over state-of-the-art CNN models.
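The data-consistency blocks mentioned in the abstract enforce agreement between the network's intermediate reconstruction and the acquired k-space samples at each unrolled iteration. The paper does not specify its implementation here; the sketch below shows the standard hard data-consistency projection for single-coil Cartesian undersampling, with a hypothetical function name (`data_consistency`) chosen for illustration.

```python
import numpy as np

def data_consistency(image, kspace_acquired, mask):
    """Hard data-consistency projection (illustrative sketch).

    image:           current reconstruction (H, W), complex
    kspace_acquired: measured k-space (H, W), complex; valid where mask == 1
    mask:            binary sampling mask (H, W)
    """
    kspace = np.fft.fft2(image)
    # Keep the acquired k-space samples; use the network's
    # prediction only at unsampled locations.
    kspace = mask * kspace_acquired + (1 - mask) * kspace
    return np.fft.ifft2(kspace)

# Toy check: with a fully sampled mask, the acquired data is returned exactly.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
kspace = np.fft.fft2(img)
mask = np.ones((8, 8))
out = data_consistency(rng.standard_normal((8, 8)), kspace, mask)
assert np.allclose(out, img)
```

In an unrolled generator such as MoTran's, a block like this would alternate with the learned transformer blocks, so each stage of the network remains tied to the physical signal model.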

Keywords
Attention, Generative, MRI Reconstruction, Transformer