Browsing by Subject "FMRI"
Now showing 1 - 4 of 4
Item Open Access
Attentional modulation of hierarchical speech representations in a multitalker environment (Oxford University Press, 2021-11)
Kiremitçi, İbrahim; Yılmaz, Özgür; Çelik, Emin; Shahdloo, Mohammad; Huth, A. G.; Çukur, Tolga

Humans are remarkably adept at listening to a desired speaker in a crowded environment while filtering out nontarget speakers in the background. Attention is key to solving this difficult cocktail-party task, yet a detailed characterization of attentional effects on speech representations is lacking. It remains unclear at which levels of speech features, and to what extent, attentional modulation occurs in each brain area during the cocktail-party task. To address these questions, we recorded whole-brain blood-oxygen-level-dependent (BOLD) responses while subjects either passively listened to single-speaker stories or, in separate experiments, selectively attended to a male or a female speaker in temporally overlaid stories. Spectral, articulatory, and semantic models of the natural stories were constructed. Intrinsic selectivity profiles were identified via voxelwise models fit to passive-listening responses. Attentional modulations were then quantified based on model predictions for attended and unattended stories in the cocktail-party task. We find that attention causes broad modulations at multiple levels of speech representations, growing stronger toward later stages of processing, and that unattended speech is represented up to the semantic level in parabelt auditory cortex.
These results provide insights into the attentional mechanisms that underlie the ability to selectively listen to a desired speaker in noisy multispeaker environments.

Item Open Access
Biased competition in semantic representations across the human brain during category-based visual search (2017-01)
Shahdloo, Mohammad

Humans can perceive thousands of distinct object and action categories in the visual scene and can successfully divide their attention among multiple target categories. It has been shown that object and action categories are represented in a continuous semantic map across the cortical surface, and that attending to a specific category causes broad shifts in voxel-wise semantic tuning profiles that enhance the representation of the target category. However, the effects of dividing attention across multiple categories on semantic representation remain unclear. In line with predictions of the biased-competition model, recent evidence suggests that the brain response to two objects presented simultaneously can be described as a weighted average of the responses to the individual objects presented in isolation, and that attention biases these weights in favor of the target object. We asked whether this biased-competition hypothesis can also account for attentional modulation of semantic representations. To address this question, we recorded participants’ BOLD responses while they performed category-based search in natural movies containing 831 distinct objects and actions. Three different tasks were used: search for “humans”, search for “vehicles”, and search for “both humans and vehicles” (i.e., divided attention). Voxel-wise category models were fit separately under each task, and voxel-wise semantic tuning profiles were then obtained using a principal components analysis on the model weights. Semantic tuning profiles were compared across the single-target tasks and the divided-attention task.
We find that in higher visual cortex a substantial portion of semantic tuning during divided attention can be expressed as a weighted average of the tuning profiles during attention to the single targets. We also find that semantic tuning in category-selective regions is biased toward the preferred object category. Overall, these results suggest that the biased-competition theory accounts for attentional modulation of semantic representations during natural visual search.

Item Open Access
Contrast affects fMRI activity in middle temporal cortex related to center-surround interaction in motion perception (Frontiers Research Foundation, 2016)
Türkozer, Halide B.; Pamir, Zahide; Boyacı, Hüseyin

As the size of a high-contrast drifting Gabor patch increases, perceiving its direction of motion becomes harder. However, the same behavioral effect is not observed for a low-contrast Gabor patch. The neuronal mechanisms underlying this size-contrast interaction are not well understood. Here, using psychophysical methods and functional magnetic resonance imaging (fMRI), we investigated the neural correlates of this behavioral effect. In the behavioral experiments, motion-direction discrimination thresholds were measured for drifting Gabor patches of different sizes and contrasts. Thresholds increased significantly with stimulus size for high-contrast (65%) stimuli but did not change for low-contrast (2%) stimuli. In the fMRI experiment, cortical activity was recorded while observers viewed drifting Gabor patches of different contrasts and sizes. We found that activity in the middle temporal (MT) area increased with size at low contrast but did not change at high contrast. Taken together, our results show that MT activity reflects the size-contrast interaction in motion perception.
© 2016 Turkozer, Pamir and Boyaci.

Item Open Access
Distinct representations in occipito-temporal, parietal, and premotor cortex during action perception revealed by fMRI and computational modeling (Elsevier, 2019)
Ürgen, Burcu A.; Pehlivan, S.; Saygın, A.

Visual processing of actions is supported by a network of occipito-temporal, parietal, and premotor regions in the human brain, known as the Action Observation Network (AON). In the present study, we investigated which aspects of visually perceived actions are represented in this network using fMRI and computational modeling. Human subjects performed an action-perception task during scanning. We characterized the stimuli along different dimensions, from purely visual properties such as form and motion to higher-level aspects such as intention, using computer vision and categorical modeling. We then linked the stimulus models to the three nodes of the AON with representational similarity analysis. Our results show that different nodes of the network represent different aspects of actions. While occipito-temporal cortex performs visual analysis of actions by integrating form and motion information, parietal cortex builds on these visual representations and transforms them into more abstract and semantic representations coding the target of the action, the action type, and the intention. Taken together, these results shed light on the neuro-computational mechanisms that support visual perception of actions, and they support the view that the AON is a hierarchical system in which successive levels of the cortex code increasingly complex features.