Browsing by Author "Shahdloo, Mohammad"
Item Open Access: Attentional modulation of hierarchical speech representations in a multitalker environment (Oxford University Press, 2021-11)
Kiremitçi, İbrahim; Yılmaz, Özgür; Çelik, Emin; Shahdloo, Mohammad; Huth, A. G.; Çukur, Tolga
Humans are remarkably adept at listening to a desired speaker in a crowded environment while filtering out nontarget speakers in the background. Attention is key to solving this difficult cocktail-party task, yet a detailed characterization of attentional effects on speech representations is lacking. It remains unclear at which levels of speech features, and to what extent, attentional modulation occurs in each brain area during the cocktail-party task. To address these questions, we recorded whole-brain blood-oxygen-level-dependent (BOLD) responses while subjects either passively listened to single-speaker stories or selectively attended to a male or a female speaker in temporally overlaid stories, in separate experiments. Spectral, articulatory, and semantic models of the natural stories were constructed. Intrinsic selectivity profiles were identified via voxelwise models fit to passive-listening responses. Attentional modulations were then quantified based on model predictions for attended and unattended stories in the cocktail-party task. We find that attention causes broad modulations at multiple levels of speech representations that grow stronger toward later stages of processing, and that unattended speech is represented up to the semantic level in parabelt auditory cortex. These results provide insights into the attentional mechanisms that underlie the ability to selectively listen to a desired speaker in noisy multispeaker environments.
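To make the analysis style sketched in this abstract more concrete, the snippet below illustrates a voxelwise encoding model fit with ridge regression and a simple prediction-based attentional modulation index. This is only a minimal sketch of the general approach; the function names, the ridge penalty, and the modulation index are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Fit voxelwise linear encoding models with ridge regression.
    X: (time x features) stimulus matrix, Y: (time x voxels) BOLD responses."""
    n_feat = X.shape[1]
    W = np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)
    return W  # (features x voxels)

def voxel_correlations(X, Y, W):
    """Pearson correlation between predicted and measured responses, per voxel."""
    Y_hat = X @ W
    Yc = Y - Y.mean(axis=0)
    Pc = Y_hat - Y_hat.mean(axis=0)
    num = (Yc * Pc).sum(axis=0)
    den = np.sqrt((Yc ** 2).sum(axis=0) * (Pc ** 2).sum(axis=0)) + 1e-12
    return num / den

# Hypothetical usage: W is fit on passive-listening runs, then used to predict
# cocktail-party responses from features of the attended vs. unattended story.
# A simple per-voxel modulation index contrasts the two prediction accuracies:
# r_att = voxel_correlations(X_attended_story, Y_cocktail, W)
# r_unatt = voxel_correlations(X_unattended_story, Y_cocktail, W)
# modulation = r_att - r_unatt  # > 0: representation biased toward attended speech
```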
Item Open Access: Biased competition in semantic representation during natural visual search (Elsevier, 2020)
Shahdloo, Mohammad; Çelik, Emin; Çukur, Tolga
Humans divide their attention among multiple visual targets in daily life, and visual search becomes more difficult as the number of targets increases. The biased competition (BC) hypothesis has been put forth as an explanation for this phenomenon. BC suggests that brain responses during divided attention are a weighted linear combination of the responses during search for each target individually. This combination is assumed to be biased by the intrinsic selectivity of cortical regions. Yet, it is unknown whether attentional modulation of semantic representations is consistent with this hypothesis when viewing cluttered, dynamic natural scenes. Here, we investigated whether BC accounts for semantic representation during natural category-based visual search. Subjects viewed natural movies, and their whole-brain BOLD responses were recorded while they attended to "humans", "vehicles" (i.e., single-target attention tasks), or "both humans and vehicles" (i.e., divided attention) in separate runs. We computed a voxelwise linearity index to assess whether semantic representation during divided attention can be modeled as a weighted combination of representations during the two single-target attention tasks. We then examined the bias in the weights of this linear combination across cortical ROIs. We find that semantic representations of both target and nontarget categories during divided attention are linear to a substantial degree, and that they are biased toward the preferred target in category-selective areas across ventral temporal cortex. Taken together, these results suggest that the biased competition hypothesis is a compelling account for attentional modulation of semantic representations.

Item Open Access: Biased competition in semantic representations across the human brain during category-based visual search (Bilkent University, 2017-01)
Shahdloo, Mohammad
Humans can perceive thousands of distinct object and action categories in the visual scene and successfully divide their attention among multiple target categories. It has been shown that object and action categories are represented in a continuous semantic map across the cortical surface, and that attending to a specific category causes broad shifts in voxel-wise semantic tuning profiles to enhance the representation of the target category. However, the effects of divided attention to multiple categories on semantic representation remain unclear. In line with predictions of the biased-competition model, recent evidence suggests that the brain response to two objects presented simultaneously can be described as a weighted average of the responses to the individual objects presented in isolation, and that attention biases these weights in favor of the target object. We asked whether this biased-competition hypothesis can also account for attentional modulation of semantic representations. To address this question, we recorded participants' BOLD responses while they performed category-based search in natural movies that contained 831 distinct objects and actions. Three different tasks were used: search for "humans", search for "vehicles", and search for "both humans and vehicles" (i.e., divided attention). Voxel-wise category models were fit separately under each task, and voxel-wise semantic tuning profiles were then obtained using a principal components analysis on the model weights. Semantic tuning profiles were compared across the single-target tasks and the divided-attention task. We find that in higher visual cortex a substantial portion of semantic tuning during divided attention can be expressed as a weighted average of the tuning profiles during attention to single targets. We also find that semantic tuning in category-selective regions is biased toward the preferred object category. Overall, these results suggest that the biased-competition theory accounts for attentional modulation of semantic representations during natural visual search.
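The two items above both test whether representations under divided attention can be expressed as a weighted combination of the single-target representations. The sketch below shows one way a voxelwise linearity index and attention bias could be computed from model weights; the exact definitions used in the publications may differ, and all variable and function names are assumptions for illustration.

```python
import numpy as np

def linearity_and_bias(w_div, w_human, w_vehicle):
    """Test whether a voxel's semantic tuning under divided attention (w_div)
    is a weighted combination of its tuning under the two single-target tasks.
    All inputs are 1-D arrays over semantic model features for one voxel."""
    # Least-squares fit of w_div ~ a * w_human + b * w_vehicle
    A = np.stack([w_human, w_vehicle], axis=1)      # (features x 2)
    coefs, *_ = np.linalg.lstsq(A, w_div, rcond=None)
    a, b = coefs
    w_fit = A @ coefs
    # Linearity index: agreement between measured and reconstructed tuning
    linearity = np.corrcoef(w_div, w_fit)[0, 1]
    # Attention bias: relative weight of the "humans" task in the combination
    bias = a / (abs(a) + abs(b) + 1e-12)
    return linearity, bias
```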
Item Open Access: Optimization and machine learning in MRI: applications in rapid MR image reconstruction and encoding models of cortical representations (Bilkent University, 2020-02)
Shahdloo, Mohammad
Magnetic Resonance Imaging (MRI) is a non-invasive medical imaging modality that is widely used by clinicians and researchers to picture body anatomy and neuronal function. However, long scan times remain a major problem. Recently, multiple techniques have emerged that reduce the number of acquired MRI signal samples, hence dramatically accelerating the acquisition. These techniques involve sophisticated signal reconstruction procedures that in essence require solving regularized optimization problems, and clinical adoption of accelerated MRI critically relies on self-tuning solutions for these problems. In addition, recent experimental approaches in cognitive neuroscience favor naturalistic audio-visual stimuli that closely resemble humans' daily-life experience. Yet, these modern paradigms inevitably lead to huge functional MRI (fMRI) datasets that require advanced statistical and computational techniques to uncover the large amount of embedded information. Here, we propose a novel, efficient, data-driven self-tuning reconstruction method for accelerated MRI. We demonstrate superior performance of the proposed method across various simulated and in vivo datasets and under various scan configurations. Furthermore, we develop statistical analysis tools to investigate the neural representation of hundreds of action categories in natural movies via fMRI, and study their attentional modulations. Finally, we develop a model-based framework to estimate the temporal extent of semantic information integration in the brain, and investigate its attentional modulations using fMRI data recorded during natural story listening. In short, the methodological and analytical approaches introduced in this thesis greatly benefit the clinical utility of accelerated MRI and enhance our understanding of brain function in daily life.

Item Open Access: Prior-guided image reconstruction for accelerated multi-contrast MRI via generative adversarial networks (IEEE, 2020)
Dar, Salman U.H.; Yurt, Mahmut; Shahdloo, Mohammad; Ildız, Muhammed Emrullah; Tınaz, Berk; Çukur, Tolga
Multi-contrast MRI acquisitions of an anatomy enrich the magnitude of information available for diagnosis. Yet, excessive scan times associated with additional contrasts may be a limiting factor. Two mainstream frameworks for enhanced scan efficiency are reconstruction of undersampled acquisitions and synthesis of missing acquisitions. Recently, deep learning methods have enabled significant performance improvements in both frameworks. Yet, reconstruction performance decreases toward higher acceleration factors with diminished sampling density at high spatial frequencies, whereas synthesis can manifest artefactual sensitivity or insensitivity to image features due to the absence of data samples from the target contrast. In this article, we propose a new approach for synergistic recovery of undersampled multi-contrast acquisitions based on conditional generative adversarial networks. The proposed method mitigates the limitations of pure learning-based reconstruction or synthesis by utilizing three priors: a shared high-frequency prior available in the source contrast to preserve high-spatial-frequency details, a low-frequency prior available in the undersampled target contrast to prevent feature leakage/loss, and a perceptual prior to improve recovery of high-level features. Demonstrations on brain MRI datasets from healthy subjects and patients indicate the superior performance of the proposed method compared to pure reconstruction and synthesis methods. The proposed method can help improve the quality and scan efficiency of multi-contrast MRI exams.
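For intuition about the first two priors described in the abstract above, the sketch below separates a low-spatial-frequency prior from the undersampled target k-space and a high-spatial-frequency prior from the fully sampled source contrast. The mask handling, cutoff radius, and variable names are illustrative assumptions; the learned perceptual prior and the adversarial recovery network themselves are not shown.

```python
import numpy as np

def split_kspace_priors(source_img, target_kspace, mask, lowpass_radius=0.1):
    """Illustrative separation of two image-domain priors.
    source_img:    fully sampled image of the source contrast (2-D array)
    target_kspace: undersampled k-space of the target contrast (2-D array)
    mask:          binary sampling mask of the target acquisition"""
    ny, nx = source_img.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    low = np.sqrt(kx ** 2 + ky ** 2) <= lowpass_radius  # central k-space region

    # Low-frequency prior: central, densely sampled part of the target k-space
    low_prior = np.fft.ifft2(target_kspace * mask * low)

    # High-frequency prior: peripheral k-space of the source contrast
    source_kspace = np.fft.fft2(source_img)
    high_prior = np.fft.ifft2(source_kspace * (~low))

    return np.abs(low_prior), np.abs(high_prior)
```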
Item Open Access: Projection onto epigraph sets for rapid self-tuning compressed sensing MRI (IEEE, 2019)
Shahdloo, Mohammad; Ilıcak, Efe; Tofighi, Mohammad; Sarıtaş, Emine Ülkü; Çetin, A. Enis; Çukur, Tolga
The compressed sensing (CS) framework leverages the sparsity of MR images to reconstruct them from undersampled acquisitions. CS reconstructions involve one or more regularization parameters that weigh sparsity in transform domains against fidelity to acquired data. While parameter selection is critical for reconstruction quality, the optimal parameters are subject- and dataset-specific; thus, commonly practiced heuristic parameter selection generalizes poorly to independent datasets. Recent studies have proposed to tune parameters by estimating the risk of removing significant image coefficients, performing line searches across the parameter space to identify the parameter value that minimizes this risk. Although effective, these line searches yield prolonged reconstruction times. Here, we propose a new self-tuning CS method that uses computationally efficient projections onto epigraph sets of the ℓ1 and total-variation norms to simultaneously achieve parameter selection and regularization. In vivo demonstrations are provided for balanced steady-state free precession, time-of-flight, and T1-weighted imaging. The proposed method achieves an order of magnitude improvement in computational efficiency over line-search methods while maintaining near-optimal parameter selection.
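As background for the core operation named in the title, here is a minimal sketch of the Euclidean projection of a point onto the epigraph set of the ℓ1 norm, {(w, t) : ||w||_1 <= t}, computed by bisection on the soft-threshold level. This is an illustrative route to the projection under standard convex-optimization arguments, not the specific procedure or implementation of the paper.

```python
import numpy as np

def project_onto_l1_epigraph(v, s, tol=1e-8):
    """Euclidean projection of the point (v, s) onto {(w, t): ||w||_1 <= t}.
    v: coefficient vector, s: scalar epigraph coordinate."""
    if np.abs(v).sum() <= s:
        return v.copy(), s  # already inside the epigraph

    soft = lambda x, lam: np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    # At the projection, w = soft(v, lam) and t = s + lam for the lam > 0 that
    # satisfies ||soft(v, lam)||_1 = s + lam; find it by bisection on the
    # monotonically decreasing function f(lam) = ||soft(v, lam)||_1 - s - lam.
    lo, hi = 0.0, np.abs(v).max() + max(0.0, -s)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.abs(soft(v, lam)).sum() - s - lam > 0:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return soft(v, lam), s + lam
```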