Browsing by Subject "Voxelwise modeling"
Now showing 1 - 5 of 5
Item Open Access: Biased competition in semantic representation during natural visual search (Elsevier, 2020)
Shahdloo, Mohammad; Çelik, Emin; Çukur, Tolga

Humans divide their attention among multiple visual targets in daily life, and visual search becomes more difficult as the number of targets increases. The biased competition (BC) hypothesis has been put forth as an explanation for this phenomenon. BC suggests that brain responses during divided attention are a weighted linear combination of the responses during search for each target individually, and that this combination is biased by the intrinsic selectivity of cortical regions. Yet, it is unknown whether attentional modulation of semantic representations is consistent with this hypothesis when viewing cluttered, dynamic natural scenes. Here, we investigated whether BC accounts for semantic representation during natural category-based visual search. Subjects viewed natural movies, and their whole-brain BOLD responses were recorded while they attended to "humans", "vehicles" (i.e., single-target attention tasks), or "both humans and vehicles" (i.e., divided attention) in separate runs. We computed a voxelwise linearity index to assess whether semantic representation during divided attention can be modeled as a weighted combination of the representations during the two single-target attention tasks. We then examined the bias in the weights of this linear combination across cortical ROIs. We find that semantic representations of both target and nontarget categories during divided attention are linear to a substantial degree, and that they are biased toward the preferred target in category-selective areas across ventral temporal cortex.
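The voxelwise linearity analysis described above can be sketched roughly as follows. This is a minimal illustration, not the authors' published analysis code: the synthetic single-voxel data, the function name, the bias weights (0.7 / 0.3), and the use of ordinary least squares plus a correlation-based index are all assumptions made here for demonstration.

```python
import numpy as np

def linearity_index(r_divided, r_human, r_vehicle):
    """Fit r_divided ~ w_h * r_human + w_v * r_vehicle by least squares.

    Returns the combination weights and a linearity index, taken here as
    the correlation between the fitted combination and the measured
    divided-attention response.
    """
    X = np.column_stack([r_human, r_vehicle])          # (n_timepoints, 2)
    w, *_ = np.linalg.lstsq(X, r_divided, rcond=None)  # combination weights
    pred = X @ w
    index = np.corrcoef(pred, r_divided)[0, 1]
    return w, index

# Synthetic single-voxel example: the divided-attention response is built
# as a biased (0.7 / 0.3) combination of the single-target responses plus
# a small amount of measurement noise.
rng = np.random.default_rng(0)
r_h = rng.standard_normal(200)
r_v = rng.standard_normal(200)
r_d = 0.7 * r_h + 0.3 * r_v + 0.05 * rng.standard_normal(200)
w, li = linearity_index(r_d, r_h, r_v)
```

On such data the recovered weights are biased toward the first (preferred) target and the linearity index is close to 1, mirroring the "linear to a substantial degree" finding in the abstract.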
Taken together, these results suggest that the biased competition hypothesis is a compelling account of attentional modulation of semantic representations.

Item Open Access: Effects of auditory attention on language representation across the human brain (Bilkent University, 2019-09)
Yılmaz, Özgür

Humans can effortlessly identify target auditory objects during natural listening and shift their focus between different targets. Unique allocation of brain resources would be inefficient for such a semantic search task. Here, we hypothesize that auditory attention shifts the tuning of cortical voxels toward the target category, and that attention expands the representation of target words while compressing the representation of behaviorally irrelevant words across cortex. To test this, we designed an fMRI experiment with a semantic search task. Subjects listened to natural stories twice while searching for words that are semantically related to either 'humans' or 'places'. Voxelwise models fit for the two attention tasks were compared to identify semantic tuning shifts in single voxels. Results indicate that attention shifts the semantic tuning of single voxels broadly across cortex and warps language representation in favor of target words. We also introduced a novel feature regularization in voxelwise modeling for a naturalistic movie experiment. Feature regularization enforces similar model weights over semantically related stimulus features. We tested the proposed method on an fMRI experiment with naturalistic movies. Results suggest that the proposed method offers improved sensitivity in modeling single voxels. Moreover, we proposed a novel method to improve the sensitivity of phase-sensitive fat-water separation in balanced steady-state free precession (bSSFP) acquisitions. In bSSFP applications using phased-array coils, reconstructed images suffer from spatial sensitivity variations within individual coils.
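The feature-regularization idea mentioned in this abstract, pulling the weights of semantically related features toward each other, can be sketched as a ridge regression with an added graph penalty. This is a hedged sketch under assumptions, not the thesis implementation: the closed-form solver, the similarity matrix, and the regularization strengths are illustrative choices.

```python
import numpy as np

def laplacian(S):
    """Graph Laplacian of a symmetric feature-similarity matrix S."""
    return np.diag(S.sum(axis=1)) - S

def feature_regularized_ridge(X, y, L, lam_ridge=1.0, lam_feat=1.0):
    """Ridge regression with a graph penalty on related features:

        w = argmin ||y - X w||^2 + lam_ridge ||w||^2 + lam_feat w^T L w

    The w^T L w term equals the sum of squared differences between the
    weights of connected (similar) features, so it enforces similar
    weights over semantically related features. Closed-form solution.
    """
    p = X.shape[1]
    A = X.T @ X + lam_ridge * np.eye(p) + lam_feat * L
    return np.linalg.solve(A, X.T @ y)

# Toy example: features 0 and 1 are "semantically related"; the graph
# penalty shrinks the gap between their weights relative to plain ridge.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, 0.2, -0.5]) + 0.1 * rng.standard_normal(100)
S = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])          # similarity graph: edge 0 - 1
w_plain = feature_regularized_ridge(X, y, laplacian(S), 1.0, 0.0)
w_reg = feature_regularized_ridge(X, y, laplacian(S), 1.0, 100.0)
```

With the penalty switched on, the weights of the two related features move toward each other while unrelated features are handled by the ordinary ridge term alone.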
To address this, we first performed region-growing phase correction in individual coil images, then used a linear combination of the phase-corrected images. Tests on SSFP angiograms of the thigh, lower leg, and foot suggest that the proposed method enhances fat-water separation in phased-array acquisitions with improved phase estimates.

Item Open Access: Spatially informed voxelwise modeling and dynamic scene category representation in the human brain (Bilkent University, 2021-12)
Çelik, Emin

Humans have an impressive ability to rapidly process global information in natural scenes to infer their category. Yet, it remains unclear whether and how scene categories observed dynamically in the natural world are represented in cerebral cortex beyond a few canonical scene-selective areas. To address this question, here we examined the representation of dynamic visual scenes by recording whole-brain blood oxygenation level-dependent (BOLD) responses while subjects viewed natural movies. We fit voxelwise encoding models to estimate tuning for scene categories that reflect statistical ensembles of objects and actions in the natural world. Voxelwise modeling (VM) is a powerful framework for predicting single-voxel responses evoked by a rich set of stimulus features present in complex natural stimuli. However, because VM disregards correlations across neighboring voxels, its sensitivity in detecting functional selectivity can be diminished in the presence of high levels of measurement noise. Here, we introduce spatially informed voxelwise modeling (SPIN-VM) to take advantage of response correlations in spatial neighborhoods of voxels. To optimally utilize shared information, SPIN-VM performs regularization across spatial neighborhoods in addition to model features, while still generating single-voxel response predictions. Compared to VM, SPIN-VM yields higher prediction accuracies and better captures locally congruent information representations across cortex.
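The SPIN-VM idea, regularizing across spatial neighborhoods of voxels in addition to model features while still predicting each voxel individually, can be sketched as a jointly regularized least-squares fit. This is a simplified sketch under assumptions, not the published SPIN-VM implementation: the dense Kronecker-product solver, the two-voxel toy problem, and the regularization strengths are illustrative only and would not scale to whole-brain data.

```python
import numpy as np

def spin_vm_fit(X, Y, L_space, lam_ridge=1.0, lam_space=1.0):
    """Jointly fit encoding weights for all voxels, regularizing both the
    feature weights (ridge) and the differences between weight vectors of
    spatially neighboring voxels (graph penalty):

        W = argmin ||Y - X W||_F^2 + lam_ridge ||W||_F^2
                   + lam_space tr(W L_space W^T)

    X: (time, features); Y: (time, voxels); L_space: (voxels, voxels)
    spatial-neighborhood Laplacian. Returns W of shape (features, voxels);
    predictions X @ W remain single-voxel predictions.
    """
    p = X.shape[1]
    v = Y.shape[1]
    # Normal equations in vectorized (voxel-major) form.
    A = (np.kron(np.eye(v), X.T @ X)
         + lam_ridge * np.eye(p * v)
         + lam_space * np.kron(L_space, np.eye(p)))
    b = (X.T @ Y).T.reshape(-1)
    w = np.linalg.solve(A, b)
    return w.reshape(v, p).T

# Toy problem: two neighboring voxels share the same true tuning but get
# independent noise; spatial regularization pulls their estimates together.
rng = np.random.default_rng(2)
X = rng.standard_normal((80, 4))
w_true = np.array([1.0, -1.0, 0.5, 0.0])
Y = np.column_stack([X @ w_true, X @ w_true]) + 0.5 * rng.standard_normal((80, 2))
L = np.array([[1.0, -1.0],
              [-1.0, 1.0]])             # Laplacian of a 2-voxel neighborhood
W0 = spin_vm_fit(X, Y, L, 1.0, 0.0)     # plain VM (ridge only)
W1 = spin_vm_fit(X, Y, L, 1.0, 20.0)    # SPIN-VM-style spatial smoothing
```

The spatial term shrinks the noise-driven disagreement between neighboring voxels' weight vectors while leaving each voxel with its own response prediction, which is the mechanism behind the improved sensitivity claimed in the abstract.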
We find that this scene-category model explains a significant portion of the response variance broadly across cerebral cortex. Cluster analysis of scene-category tuning profiles across cortex reveals nine spatially segregated networks of brain regions, consistent across subjects. These networks show heterogeneous tuning for a diverse set of dynamic scene categories related to navigation, human activity, social interaction, civilization, natural environment, non-human animals, motion energy, and texture, suggesting that the organization of scene-category representation is quite complex.

Item Open Access: Spatially informed voxelwise modeling for naturalistic fMRI experiments (Elsevier, 2019)
Çelik, Emin; Dar, Salman Ul Hassan; Yılmaz, Özgür; Keleş, Ümit; Çukur, Tolga

Voxelwise modeling (VM) is a powerful framework for predicting single-voxel responses evoked by a rich set of stimulus features present in complex natural stimuli. However, because VM disregards correlations across neighboring voxels, its sensitivity in detecting functional selectivity can be diminished in the presence of high levels of measurement noise. Here, we introduce spatially informed voxelwise modeling (SPIN-VM) to take advantage of response correlations in spatial neighborhoods of voxels. To optimally utilize shared information, SPIN-VM performs regularization across spatial neighborhoods in addition to model features, while still generating single-voxel response predictions. We demonstrated the performance of SPIN-VM on a rich dataset from a natural vision experiment. Compared to VM, SPIN-VM yields higher prediction accuracies and better captures locally congruent information representations across cortex.
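The prediction-accuracy comparison between VM and SPIN-VM rests on a standard evaluation step: fit encoding weights on training data and score each voxel by the correlation between predicted and measured held-out responses. The sketch below shows that evaluation for a plain voxelwise (ridge) model; it is a hedged illustration with synthetic data and illustrative parameter choices, not the paper's evaluation pipeline.

```python
import numpy as np

def ridge_fit(X, Y, lam=1.0):
    """Plain voxelwise model: an independent ridge regression per voxel,
    solved jointly via the shared closed form."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

def prediction_accuracy(X_test, Y_test, W):
    """Per-voxel prediction accuracy: the Pearson correlation between
    predicted and measured held-out responses, one value per voxel."""
    P = X_test @ W
    P = (P - P.mean(axis=0)) / P.std(axis=0)
    Z = (Y_test - Y_test.mean(axis=0)) / Y_test.std(axis=0)
    return (P * Z).mean(axis=0)

# Synthetic experiment: 5 stimulus features, 3 voxels, separate
# training and held-out runs with independent measurement noise.
rng = np.random.default_rng(3)
X_tr = rng.standard_normal((100, 5))
X_te = rng.standard_normal((50, 5))
W_true = rng.standard_normal((5, 3))
Y_tr = X_tr @ W_true + 0.3 * rng.standard_normal((100, 3))
Y_te = X_te @ W_true + 0.3 * rng.standard_normal((50, 3))
acc = prediction_accuracy(X_te, Y_te, ridge_fit(X_tr, Y_tr))
```

Comparing such per-voxel accuracies between two fitting procedures (e.g., plain VM versus a spatially regularized variant) is how "higher prediction accuracies" is typically quantified.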
These results suggest that SPIN-VM offers improved performance in predicting single-voxel responses and recovering coherent information representations.

Item Open Access: Task-dependent warping of semantic representations during search for visual action categories (The Journal of Neuroscience, 2022-08-31)
Shahdloo, Mo; Çelik, Emin; Urgen, Burcu A.; Gallant, J. L.; Çukur, Tolga

Object and action perception in cluttered, dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, the distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether, and where in the brain, visual search for action categories modulates semantic representations. To address this fundamental question, we studied brain activity recorded from five subjects (one female) via functional magnetic resonance imaging while they viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex, and that these shifts interact with the intrinsic selectivity of cortical voxels for target actions. These results suggest that attention serves to facilitate task performance during social interactions by dynamically shifting semantic selectivity toward target actions, and that tuning shifts are a general feature of conceptual representations in the brain.
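A tuning shift of the kind described in this last abstract is commonly quantified by comparing a voxel's semantic tuning vector (its fitted model weights) across attention conditions. The sketch below shows one simple way to do this, as the change in the projection of the normalized tuning vector onto a target-category axis. This is an assumed, illustrative metric with toy numbers, not the specific index used in the paper.

```python
import numpy as np

def tuning_shift(w_attend_target, w_baseline, target_axis):
    """Tuning-shift index for one voxel: the change, between two attention
    conditions, in the projection of the unit-normalized semantic tuning
    vector onto a target-category axis. Positive values mean the tuning
    moved toward the target under the attend-to-target condition."""
    ua = w_attend_target / np.linalg.norm(w_attend_target)
    ub = w_baseline / np.linalg.norm(w_baseline)
    t = target_axis / np.linalg.norm(target_axis)
    return float(ua @ t - ub @ t)

# Toy voxel in a 3-dimensional semantic space: under attention to the
# target category, its tuning vector rotates toward the target axis.
target = np.array([1.0, 0.0, 0.0])
w_base = np.array([0.2, 1.0, 0.5])
w_attn = w_base + 0.8 * target          # tuning pulled toward the target
shift = tuning_shift(w_attn, w_base, target)
```

Aggregating such per-voxel indices across cortex, and relating them to each voxel's intrinsic selectivity, is the kind of analysis that supports the "tuning shifts that warp semantic representations" conclusion.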