DreaMR: Diffusion-driven counterfactual explanation for functional MRI
Abstract
Deep learning analyses have offered sensitivity leaps in detection of cognition-related variables from functional MRI (fMRI) measurements of brain responses. Yet, as deep models perform hierarchical nonlinear transformations on fMRI data, interpreting the association between individual brain regions and the detected variables is challenging. Among explanation approaches for deep fMRI classifiers, attribution methods show poor specificity and perturbation methods show limited sensitivity. While counterfactual generation promises to address these limitations, previous counterfactual methods based on variational or adversarial priors can yield suboptimal sample fidelity. Here, we introduce the first diffusion-driven counterfactual method, DreaMR, to enable fMRI interpretation with high fidelity. DreaMR performs diffusion-based resampling of an input fMRI sample to alter the decision of a downstream classifier, and then computes the difference between the original sample and the counterfactual sample for explanation. Unlike conventional diffusion methods, DreaMR leverages a novel fractional multi-phase-distilled diffusion prior to improve inference efficiency without compromising fidelity, and it employs a transformer architecture to account for long-range spatiotemporal context in fMRI scans. Comprehensive experiments on neuroimaging datasets demonstrate the superior fidelity and efficiency of DreaMR in sample generation over state-of-the-art counterfactual methods for fMRI explanation.
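The abstract describes a general counterfactual-explanation loop: resample the input through a generative prior until the downstream classifier's decision flips, then report the difference between the original and counterfactual samples as the explanation. A minimal toy sketch of that loop follows, assuming a linear stand-in classifier and a Gaussian nudging step in place of DreaMR's actual diffusion prior; all names and parameters are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy downstream classifier: sign of a linear score
# (a stand-in for a deep fMRI classifier).
w = np.array([1.0, -2.0, 0.5])

def classify(x):
    return int(w @ x > 0.0)

def resample_step(x, strength=0.3):
    # Placeholder for one generative resampling step: nudge the sample
    # in the direction that flips the classifier score, plus small noise.
    # (Illustrative only; DreaMR uses a distilled diffusion prior here.)
    direction = w / np.linalg.norm(w)
    sign = -1.0 if classify(x) == 1 else 1.0
    return x + sign * strength * direction + 0.01 * rng.standard_normal(x.shape)

def counterfactual_explain(x, max_steps=200):
    """Resample until the decision flips; explanation = difference map."""
    original_label = classify(x)
    xc = x.copy()
    for _ in range(max_steps):
        xc = resample_step(xc)
        if classify(xc) != original_label:
            break
    return xc, xc - x  # counterfactual sample and explanation

x = np.array([2.0, 0.5, 1.0])  # toy "fMRI feature" vector
xc, explanation = counterfactual_explain(x)
print(classify(x), classify(xc))  # labels differ after resampling
```

The explanation vector `xc - x` plays the role of the difference map the abstract mentions: large entries mark the input features that had to change to alter the decision.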