Title: MRI reconstruction with conditional adversarial transformers
Authors: Korkmaz, Yılmaz; Özbey, Muzaffer; Çukur, Tolga
Editors: Haq, Nandinee; Johnson, Patricia; Maier, Andreas; Qin, Chen; Würfl, Tobias; Yoo, Jaejun
Type: Conference Paper
Conference Name: 5th International Workshop on Machine Learning for Medical Reconstruction, MLMIR 2022
Date of Conference: 22 September 2022
Date Issued: 2022-09-22
Date Available: 2023-02-15
Language: English
Keywords: Attention; Generative; MRI Reconstruction; Transformer
ISBN: 978-3-031-17246-5; 978-3-031-17247-2
DOI: 10.1007/978-3-031-17247-2_7
Handle: http://hdl.handle.net/11693/111377

Abstract: Deep learning has been successfully adopted for accelerated MRI reconstruction owing to its exceptional performance on inverse problems. Deep reconstruction models are commonly based on convolutional neural network (CNN) architectures that use compact, input-invariant filters to capture static local features in the data. While this inductive bias allows efficient model training on relatively small datasets, it also limits sensitivity to long-range context and compromises generalization performance. Transformers are a promising alternative that use broad-scale, input-adaptive filtering to improve contextual sensitivity and generalization. Yet, existing transformer architectures incur quadratic complexity and often neglect the physical signal model. Here, we introduce a model-based transformer architecture (MoTran) for high-performance MRI reconstruction. MoTran is an adversarial architecture that unrolls transformer and data-consistency blocks in its generator. Cross-attention transformers are leveraged to maintain linear complexity in the feature map size. Comprehensive experiments on MRI reconstruction tasks show that the proposed model improves image quality over state-of-the-art CNN models.
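
The generator described in the abstract (transformer blocks unrolled with data-consistency blocks, with attention cost linear in the feature map size) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: module names (UnrolledGenerator, CrossAttnBlock, DataConsistency), hyperparameters (num_iters, dim, num_latents), and the single-coil Cartesian setting are illustrative assumptions, and the adversarial discriminator and training loop are omitted.

# Minimal sketch (assumptions as stated above), not the authors' MoTran code.
import torch
import torch.nn as nn
import torch.fft as fft

class DataConsistency(nn.Module):
    """Re-impose measured k-space samples at acquired locations (physical signal model)."""
    def forward(self, x, y, mask):
        k = fft.fft2(x)                       # image -> k-space
        k = mask * y + (1 - mask) * k         # keep measured samples, fill in the rest
        return fft.ifft2(k)                   # back to image domain

class CrossAttnBlock(nn.Module):
    """Cross-attention between pixel tokens and a small set of learned latent tokens,
    so attention cost grows linearly with the number of pixels."""
    def __init__(self, dim=64, num_latents=64, heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.attn_in = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_out = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj_in = nn.Conv2d(2, dim, 1)   # real/imag channels -> features
        self.proj_out = nn.Conv2d(dim, 2, 1)

    def forward(self, x):                     # x: complex image, shape (B, H, W)
        b, h, w = x.shape
        feat = self.proj_in(torch.stack([x.real, x.imag], dim=1))
        tokens = feat.flatten(2).transpose(1, 2)          # (B, H*W, dim)
        lat = self.latents.expand(b, -1, -1)              # (B, num_latents, dim)
        lat, _ = self.attn_in(lat, tokens, tokens)        # pixels -> latents
        tokens, _ = self.attn_out(tokens, lat, lat)       # latents -> pixels
        feat = tokens.transpose(1, 2).reshape(b, -1, h, w)
        out = self.proj_out(feat)
        return x + torch.complex(out[:, 0], out[:, 1])    # residual refinement

class UnrolledGenerator(nn.Module):
    """Alternate transformer refinement and data consistency for a fixed number of iterations."""
    def __init__(self, num_iters=5):
        super().__init__()
        self.blocks = nn.ModuleList([CrossAttnBlock() for _ in range(num_iters)])
        self.dc = DataConsistency()

    def forward(self, y, mask):
        x = fft.ifft2(y)                      # zero-filled initialization
        for blk in self.blocks:
            x = blk(x)                        # contextual (transformer) update
            x = self.dc(x, y, mask)           # enforce fidelity to measured k-space
        return x

# Example usage with synthetic shapes: y is undersampled k-space, mask the sampling pattern.
y = torch.randn(1, 128, 128, dtype=torch.complex64)
mask = (torch.rand(1, 128, 128) < 0.3).float()
recon = UnrolledGenerator()(y * mask, mask)

In this sketch the cross-attention routes pixel tokens through a fixed number of latent tokens, which is one common way to keep attention linear in the feature map size; the data-consistency block after each refinement step reflects the model-based (physics-guided) aspect highlighted in the abstract.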