Browsing by Subject "Action perception"
Now showing 1 - 6 of 6
Item Open Access
Distinct representations in occipito-temporal, parietal, and premotor cortex during action perception revealed by fMRI and computational modeling (Elsevier, 2019)
Ürgen, Burcu A.; Pehlivan, S.; Saygın, A.
Visual processing of actions is supported by a network of occipito-temporal, parietal, and premotor regions in the human brain, known as the Action Observation Network (AON). In the present study, we investigate which aspects of visually perceived actions are represented in this network using fMRI and computational modeling. Human subjects performed an action perception task during scanning. We characterized different aspects of the stimuli, ranging from purely visual properties such as form and motion to higher-level aspects such as intention, using computer vision and categorical modeling. We then linked these stimulus models to the three nodes of the AON with representational similarity analysis. Our results show that different nodes of the network represent different aspects of actions. While occipito-temporal cortex performs visual analysis of actions by integrating form and motion information, parietal cortex builds on these visual representations and transforms them into more abstract, semantic representations coding the target of the action, the action type, and the intention. Taken together, these results shed light on the neuro-computational mechanisms that support visual perception of actions and support the view that the AON is a hierarchical system in which successively higher levels of the cortex code increasingly complex features.
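
The entry above links computational models of the stimuli to AON regions through representational similarity analysis (RSA). The minimal sketch below illustrates the general RSA logic on simulated data: build a neural representational dissimilarity matrix (RDM) from condition-wise ROI response patterns, build a model RDM from stimulus features, and rank-correlate the two. All array shapes, the feature model, and variable names are illustrative assumptions, not details taken from the study.

```python
# Minimal representational similarity analysis (RSA) sketch on simulated data.
# Shapes, the feature model, and variable names are assumptions for illustration.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_conditions = 24          # e.g., distinct action videos
n_voxels = 200             # voxels in one AON ROI (e.g., parietal cortex)
n_model_features = 10      # e.g., motion-energy or categorical features

# Condition-by-voxel response patterns for one ROI (here: simulated).
roi_patterns = rng.standard_normal((n_conditions, n_voxels))

# Condition-by-feature matrix for one stimulus model (here: simulated).
model_features = rng.standard_normal((n_conditions, n_model_features))

# Representational dissimilarity matrices as condensed upper-triangle vectors.
neural_rdm = pdist(roi_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# Rank correlation between model and neural RDMs = RSA score for this ROI.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA correlation (model vs. ROI): rho = {rho:.3f}, p = {p:.3f}")
```

In practice this comparison would be repeated for each ROI and each stimulus model, with significance assessed against permutation nulls and performance referenced to a noise ceiling.
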
Item Open Access
Explicit and implicit measurement of mind perception in social robots through individual differences modulation (2022-06)
Saltık, İmge
The attribution of mental states to an object or agent, based on its appearance or behavior, is called mind perception (Gray et al., 2007). Recent research on human-robot interaction has shown that, under certain conditions, robots can elicit mind perception much like other agents. However, while the two dimensions of mind perception (Agency and Experience) are mostly assessed with explicit measurement methods in the literature, implicit measurement of mind perception is still almost nonexistent. In addition to this gap, studies of mind perception in robots have focused on how appearance affects mind perception, whereas the effect of perceived action has hardly been examined. In this context, we investigated how robots' actions and appearance affect mind perception. Methodologically, we used both explicit and implicit measurement methods to address this gap, and we included individual difference measures to identify sources of variation in the mind perception attributed to robots. In the first study, participants (N=102) evaluated how the robots' actions (biological, verbal and nonverbal communicative, and neutral) and appearance (humanoid and mechanical) affect mind perception; in the second study, participants (N=185) evaluated the effect of robots' actions and appearances on mind perception using both implicit and explicit measurement methods. In addition, 11 individual difference measures were used to identify individual differences that modulate mind perception. Both studies showed that robots' actions affect mind perception. With explicit measurement, neutral behavior elicited less mind perception than communicative and biological action. With implicit measurement, reaction-time differences were observed between communicative actions and biological and neutral actions. Individual differences modulating explicit and implicit mind perception were also observed: intentionality of behavior, theory of mind, and perceived loneliness were the core modulators of explicit mind perception, while negative mood primarily modulated implicit mind perception. Overall, action perception affected mind perception, implicit and explicit mind perception showed different patterns, and individual differences predicted these patterns.

Item Open Access
Neural underpinnings of biological motion perception under attentional load (2022-06)
Çalışkan, Hilal Nizamoğlu
Humans can detect and differentiate biological motion (BM) from non-biological motion effortlessly, even when the stimuli are as simple as a set of moving dots (i.e., point-light displays [PLDs]). Given its significance for survival and social interaction, BM perception is assumed to occur automatically. Indeed, Thornton and Vuong [1] showed that task-irrelevant BM in the periphery interfered with task performance at the fovea. However, the neural underpinnings of this bottom-up processing of BM lack thorough examination. Under selective attention, BM perception is supported by a network of regions including the occipito-temporal, parietal, and premotor cortices. A retinotopic mapping study of BM showed distinct maps for its processing under and away from selective attention [2]. Based on these findings, we investigated how bottom-up perception of BM is processed under attentional load when BM is shown away from the focus of attention as a task-irrelevant stimulus. Participants (N=31) underwent an fMRI study in which they performed an attentionally demanding visual detection task at the fovea while intact or scrambled PLDs of BM were shown in the periphery. Our results showed a main effect of attentional load in fronto-parietal regions, as well as a main effect of the peripheral stimuli in occipito-temporal cortex. Both univariate and multivariate pattern analysis results support attentional load modulation of BM processing. Lastly, ROI results for each core node of the BM processing network expanded these findings by showing that attentional load modulation of both intact and scrambled BM stimuli was strongest in bilateral occipito-temporal regions compared to parietal and premotor cortices. In conclusion, BM was processed within motion-sensitive regions of the occipito-temporal cortex when shown away from the focus of selective attention, and this processing was modulated by attentional load.

Item Open Access
Optimization and machine learning in MRI: applications in rapid MR image reconstruction and encoding models of cortical representations (2020-02)
Shahdloo, Mohammad
Magnetic Resonance Imaging (MRI) is a non-invasive medical imaging modality that is widely used by clinicians and researchers to picture body anatomy and neuronal function. However, long scan times remain a major problem. Recently, multiple techniques have emerged that reduce the number of acquired MRI signal samples, dramatically accelerating acquisition. These techniques involve sophisticated signal reconstruction procedures that in essence require solving regularized optimization problems, and clinical adoption of accelerated MRI critically relies on self-tuning solutions for these problems. In addition, recent experimental approaches in cognitive neuroscience favor naturalistic audio-visual stimuli that closely resemble humans' daily-life experience. Yet these modern paradigms inevitably lead to huge functional MRI (fMRI) datasets that require advanced statistical and computational techniques to uncover the large amount of embedded information. Here, we propose a novel, efficient, data-driven self-tuning reconstruction method for accelerated MRI. We demonstrate superior performance of the proposed method across various simulated and in vivo datasets and under various scan configurations. Furthermore, we develop statistical analysis tools to investigate the neural representation of hundreds of action categories in natural movies via fMRI, and study their attentional modulations. Finally, we develop a model-based framework to estimate the temporal extent of semantic information integration in the brain, and investigate its attentional modulations using fMRI data recorded during natural story listening. In short, the methodological and analytical approaches introduced in this thesis greatly benefit the clinical utility of accelerated MRI and enhance our understanding of brain function in daily life.
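
The thesis above frames accelerated MRI reconstruction as a regularized optimization problem whose regularization weights must be tuned. The toy sketch below shows one generic instance of such a problem, solved with iterative soft-thresholding (ISTA): l1-regularized least squares with an undersampled Fourier operator. It is not the self-tuning method proposed in the thesis; the phantom, the sampling mask, and the regularization weight lam are arbitrary assumptions.

```python
# Toy compressed-sensing MRI reconstruction via ISTA (proximal gradient).
# Generic sketch, NOT the thesis's self-tuning method.  We solve
#     minimize_x  0.5 * || M F x - y ||_2^2  +  lam * || x ||_1
# where F is the orthonormal 2D Fourier transform and M an undersampling mask.
import numpy as np

rng = np.random.default_rng(0)

n = 64
x_true = np.zeros((n, n))
x_true[24:40, 24:40] = 1.0                      # simple sparse phantom (assumption)

def F(x):                                       # orthonormal 2D FFT
    return np.fft.fft2(x, norm="ortho")

def Fh(k):                                      # adjoint (inverse) transform
    return np.fft.ifft2(k, norm="ortho")

mask = rng.random((n, n)) < 0.35                # keep ~35% of k-space samples
y = mask * F(x_true)                            # undersampled k-space data

lam = 0.01
x = np.zeros((n, n), dtype=complex)
for _ in range(200):
    grad = Fh(mask * (F(x) - y))                # gradient of the data-fidelity term
    z = x - grad                                # step size 1 is safe: ||M F|| <= 1
    mag = np.abs(z)                             # complex soft-thresholding (prox of lam*||.||_1)
    x = z * np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)

err = np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

Practical compressed-sensing reconstructions enforce sparsity in a transform domain such as wavelets, and their quality hinges on the regularization weight, which is precisely the kind of parameter that data-driven self-tuning methods aim to set automatically.
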
Item Open Access
Predictive processing account of action perception: evidence from effective connectivity in the action observation network (Elsevier, 2020)
Ürgen, Burcu A.; Saygın, A. P.
Visual perception of actions is supported by a network of brain regions in the occipito-temporal, parietal, and premotor cortex of the primate brain, known as the Action Observation Network (AON). Although a growing body of research characterizes the functional properties of each node of this network, the communication and direction of information flow between the nodes remain unclear. According to the predictive coding account of action perception (Kilner, Friston, & Frith, 2007a, 2007b), this network is not a purely feedforward system but has backward connections through which prediction error signals are communicated between the regions of the AON. In the present study, we investigated the effective connectivity of the AON using fMRI and Dynamic Causal Modeling (DCM) in an experimental setting in which human subjects' predictions about the observed agent were violated. We specifically examined the influence of the lowest and highest nodes in the AON hierarchy, pSTS and ventral premotor cortex, respectively, on the middle node, inferior parietal cortex, during prediction violation. Our DCM results suggest that, during perception of actions that violate people's predictions, the influence on the inferior parietal node arrives through a feedback connection from ventral premotor cortex.
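
The study above uses DCM to ask whether the parietal node's response during prediction violation is driven through a feedback connection from premotor cortex. The toy simulation below is not DCM (which is typically estimated with SPM on measured BOLD data); it only illustrates what a directed, "effective" connection means, by integrating a simplified linear neural state equation dx/dt = Ax + Cu for a three-node network with and without a premotor-to-parietal feedback coupling. Region labels, coupling strengths, and the input are all made up for illustration.

```python
# Toy simulation of a linear neural state model, dx/dt = A x + C u, for a
# three-node network (pSTS -> parietal -> premotor).  Illustration only:
# not SPM's DCM, and no coupling values come from the study above.
import numpy as np

regions = ["pSTS", "parietal", "premotor"]
dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)
u = ((t > 2.0) & (t < 4.0)).astype(float)       # brief visual input

C = np.array([1.0, 0.0, 0.0])                   # input enters at pSTS only

# Feedforward-only coupling: pSTS -> parietal -> premotor (self-decay on diagonal)
A_ff = np.array([[-1.0,  0.0,  0.0],
                 [ 0.8, -1.0,  0.0],
                 [ 0.0,  0.8, -1.0]])

# Same network plus a feedback connection premotor -> parietal
A_fb = A_ff.copy()
A_fb[1, 2] = 0.6

def simulate(A):
    x = np.zeros(3)
    traj = np.empty((t.size, 3))
    for i, ui in enumerate(u):                  # forward Euler integration
        x = x + dt * (A @ x + C * ui)
        traj[i] = x
    return traj

ff, fb = simulate(A_ff), simulate(A_fb)
for j, name in enumerate(regions):
    print(f"{name:9s} peak response: feedforward only = {ff[:, j].max():.3f}, "
          f"with premotor->parietal feedback = {fb[:, j].max():.3f}")
```

In DCM proper, competing connectivity structures like these two A matrices would be fitted to the measured data and compared with Bayesian model selection.
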
Item Embargo
Top-down effects of attention on the action observation network (2024-07)
Eroğlu, Aslı
Action perception is a fundamental skill for survival and social interaction. It is well established in the neuroscience literature that when an action is visually perceived, the Action Observation Network (AON) becomes active in the human brain. This network has three core regions: the posterior superior temporal sulcus (pSTS), parietal cortex, and premotor cortex. Recent studies have shown that action perception is not a passive process but is affected by top-down signals such as attention. In this study, we investigated the influence of attention on the AON in a two-session fMRI experiment. The stimuli comprised eight videos of pushing actions, each featuring a unique combination of actor (female or male), effector (hand or foot), and target (object or human). In the active fMRI session, participants viewed the videos while performing three different tasks that directed their attention to distinct features of the action (actor, effector, target). In the passive fMRI session, participants viewed the same videos without performing any task. The data from the passive session were used to define regions of interest (ROIs): pSTS, parietal cortex, and premotor cortex in each hemisphere. Univariate analysis, representational similarity analysis (RSA), and decoding analysis were performed. Univariate analysis showed that introducing attentional demands during video viewing elicited a significant increase in neural activity within both parietal and premotor cortices relative to passive viewing. RSA revealed significant correlations between neural activity patterns and task models across all ROIs, indicating top-down influence throughout the AON. Decoding analysis showed distinct top-down effects in each ROI, depending on its hierarchical level and intrinsic selectivity. These findings demonstrate strong top-down modulation of the AON by cognitive demands, highlighting the dynamic interplay of attention and action perception.
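
The thesis above reports decoding analyses within each ROI. The minimal sketch below illustrates the generic form of such an analysis: cross-validated classification of the attended feature (actor, effector, or target task) from trial-wise ROI response patterns, here on simulated data. The shapes, labels, signal strength, and classifier choice are assumptions, not details of the study.

```python
# Minimal decoding-analysis sketch: cross-validated classification of the
# attended feature from ROI response patterns (simulated data; all settings
# are assumptions for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials_per_task = 40
n_voxels = 150
tasks = ["actor", "effector", "target"]

# Simulated trial-by-voxel patterns for one ROI, with a weak task-specific
# signal added so that decoding is above chance.
X_parts, y_parts = [], []
for label in range(len(tasks)):
    task_signal = 0.3 * rng.standard_normal(n_voxels)
    X_parts.append(rng.standard_normal((n_trials_per_task, n_voxels)) + task_signal)
    y_parts.append(np.full(n_trials_per_task, label))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f} "
      f"(chance = {1 / len(tasks):.2f})")
```

Accuracy significantly above chance within an ROI indicates that its activity patterns carry information about which feature was attended; comparing accuracies across ROIs is one way to probe how top-down effects differ along the AON hierarchy.
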