Browsing by Subject "Attention"
Now showing 1 - 20 of 23
Item Open Access: Attended end-to-end architecture for age estimation from facial expression videos (IEEE, 2020). Pei, W.; Dibeklioğlu, Hamdi; Baltrušaitis, T.
The main challenges of age estimation from facial expression videos lie not only in modeling the static facial appearance, but also in capturing the temporal facial dynamics. Traditional approaches to this problem focus on constructing handcrafted features to explore the discriminative information contained in facial appearance and dynamics separately, which relies on sophisticated feature refinement and framework design. In this paper, we present an end-to-end architecture for age estimation, called the Spatially-Indexed Attention Model (SIAM), which simultaneously learns both the appearance and dynamics of age from raw videos of facial expressions. Specifically, we employ convolutional neural networks to extract effective latent appearance representations and feed them into recurrent networks to model the temporal dynamics. More importantly, we propose to leverage attention models for salience detection in both the spatial domain, for each single image, and the temporal domain, for the whole video. We design a spatially-indexed attention mechanism among the convolutional layers to extract the salient facial regions in each individual image, and a temporal attention layer to assign attention weights to each frame. This two-pronged approach not only improves performance by allowing the model to focus on informative frames and facial areas, but also offers an interpretable correspondence between spatial facial regions, temporal frames, and the task of age estimation. We demonstrate the strong performance of our model in experiments on a large, gender-balanced database of 400 subjects with ages spanning from 8 to 76 years.
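The two attention mechanisms this entry describes (spatial attention over convolutional feature maps, then temporal attention over frames) can be sketched in a minimal NumPy form. All shapes, weight vectors, and names below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=None):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Illustrative shapes: 8 frames, a 4x4 conv feature map, 16 channels.
T, H, W, C = 8, 4, 4, 16
feats = rng.standard_normal((T, H, W, C))

# Spatial attention: a score per location, softmaxed over the H*W positions,
# pools each frame's feature map into one frame vector.
w_spatial = rng.standard_normal(C)
spatial_scores = feats @ w_spatial                                # (T, H, W)
spatial_weights = softmax(spatial_scores.reshape(T, -1), axis=1)  # (T, H*W)
frame_vecs = np.einsum('tp,tpc->tc', spatial_weights,
                       feats.reshape(T, -1, C))                   # (T, C)

# Temporal attention: a score per frame, softmaxed over time,
# pools the frame vectors into a single clip representation.
w_temporal = rng.standard_normal(C)
temporal_weights = softmax(frame_vecs @ w_temporal)               # (T,)
clip_vec = temporal_weights @ frame_vecs                          # (C,)

# Each softmax sums to 1, so both pooling steps are convex combinations.
print(frame_vecs.shape, clip_vec.shape)
```

The interpretability claim in the abstract corresponds to inspecting `spatial_weights` (which facial regions mattered) and `temporal_weights` (which frames mattered).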
Experiments reveal that our model exhibits significant superiority over state-of-the-art methods given sufficient training data.

Item Open Access: Attentional modulations of audiovisual interactions in apparent motion: Temporal ventriloquism effects on perceived visual speed (Springer New York LLC, 2022-08-22). Duyar, Aysun; Pavan, Andrea
The timing of brief stationary sounds has been shown to alter different aspects of visual motion, such as speed estimation. These effects of auditory timing have been explained by temporal ventriloquism and auditory dominance over visual information in the temporal domain. Although previous studies provide unprecedented evidence for the multisensory nature of speed estimation, how attention is involved in these audiovisual interactions remains unclear. Here, we aimed to understand the effects of spatial attention on these audiovisual interactions in time. We utilized a set of audiovisual stimuli that elicit temporal ventriloquism in visual apparent motion and asked participants to perform a speed comparison task. We manipulated attention in either the visual or the auditory domain and systematically changed the number of moving objects in the visual field. When attention was diverted to a stationary object in the visual field via a secondary task, the temporal ventriloquism effects on perceived speed decreased. On the other hand, focusing attention on the auditory stimuli facilitated these effects consistently across different difficulty levels of the secondary auditory task. Moreover, the effects of auditory timing on perceived speed did not change with the number of moving objects and were present in all experimental conditions. Taken together, our findings reveal differential effects of allocating attentional resources in the visual and auditory domains.
These behavioral results also demonstrate that reliable temporal ventriloquism effects on visual motion can be induced even in the presence of multiple moving objects in the visual field and under different perceptual load conditions.

Item Open Access: Behavioral and neural investigation on the effect of spatial attention on surround suppression (2023-09). Kınıklıoğlu, Merve
When a visual stimulus is presented together with other stimuli surrounding it, behavioral sensitivity and neural responses may change, and often decrease, compared to when the same stimulus is presented alone. This is commonly referred to as center-surround interaction, or surround suppression, and it is one of the most fundamental mechanisms in biological vision. It is well documented that in motion perception center-surround interaction is affected by the size and contrast of the stimulus. As the size of a high-contrast drifting grating increases, motion direction discrimination performance, as well as neural activity in one of the main cortical motion processing areas, the middle temporal complex (MT+), decreases (surround suppression). In contrast, when the size of a low-contrast grating increases within certain limits, both discrimination performance and neural activity in MT+ may increase (surround facilitation). Spatial attention, on the other hand, is known to modulate surround suppression in both humans and non-human animals with static stimuli. No previous study, however, has directly and systematically investigated the effect of the spatial extent of attention on surround suppression in human motion perception. The studies presented in this dissertation investigate the effect of the extent of spatial attention on center-surround interaction in visual motion processing. In our experiments, we used two attention conditions and a novel stimulus design in which a ‘center’ and a ‘surround’ drifting grating were presented to the participants.
Under one of the attention conditions, which we call the ‘narrow-attention’ condition, participants performed a task that limited their attention to the central part of the stimulus. Under the other, the ‘wide-attention’ condition, participants performed tasks that required them to extend their attention to both the center and surround gratings. Using this experimental paradigm, we measured motion direction discrimination thresholds behaviorally and cortical activity with fMRI. Behaviorally, we found increased thresholds, that is, stronger surround suppression, under the wide-attention condition. In the human homolog of MT+ (hMT+), we found that increasing the spatial extent of attention leads to reduced cortical responses, that is, to stronger neural suppression. This was not the case for activity in the primary visual cortex (V1). Finally, we show that a parsimonious computational model incorporating spatial attention and response normalization can successfully predict the response patterns in hMT+ and V1, and provides a link between cortical responses and behavioral thresholds. Overall, our findings and analyses show that the behavioral effect can be successfully predicted from hMT+ activity. These results reveal the critical role of spatial attention in surround suppression, namely that surround suppression in motion perception becomes stronger with a wider attention field, and point to possible cortical mechanisms underpinning the effect.

Item Open Access: Biased competition in semantic representation during natural visual search (Elsevier, 2020). Shahdloo, Mohammad; Çelik, Emin; Çukur, Tolga
Humans divide their attention among multiple visual targets in daily life, and visual search can become more difficult as the number of targets increases. The biased competition hypothesis (BC) has been put forth as an explanation for this phenomenon.
BC suggests that brain responses during divided attention are a weighted linear combination of the responses during search for each target individually, and that this combination is biased by the intrinsic selectivity of cortical regions. Yet, it is unknown whether attentional modulation of semantic representations is consistent with this hypothesis when viewing cluttered, dynamic natural scenes. Here, we investigated whether BC accounts for semantic representation during natural category-based visual search. Subjects viewed natural movies, and their whole-brain BOLD responses were recorded while they attended to “humans”, “vehicles” (i.e., single-target attention tasks), or “both humans and vehicles” (i.e., divided attention) in separate runs. We computed a voxelwise linearity index to assess whether semantic representation during divided attention can be modeled as a weighted combination of representations during the two single-target attention tasks. We then examined the bias in the weights of this linear combination across cortical ROIs. We find that semantic representations of both target and nontarget categories during divided attention are linear to a substantial degree, and that they are biased toward the preferred target in category-selective areas across ventral temporal cortex. Taken together, these results suggest that the biased competition hypothesis is a compelling account of attentional modulation of semantic representations.

Item Open Access: Biased competition in semantic representations across the human brain during category-based visual search (Bilkent University, 2017-01). Shahdloo, Mohammad
Humans can perceive thousands of distinct object and action categories in the visual scene and successfully divide their attention among multiple target categories.
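The weighted-combination idea behind the biased-competition entries above can be illustrated with a toy least-squares fit: simulate a divided-attention response as a biased mix of two single-target response profiles, then recover the weights per voxel. All numbers here are synthetic; the actual analyses in these studies are more involved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-target response profiles for one voxel across 100 features.
r_humans = rng.standard_normal(100)
r_vehicles = rng.standard_normal(100)

# Simulate the divided-attention response as a biased weighted combination
# plus measurement noise; the bias favors the 'humans' target.
true_w = np.array([0.7, 0.3])
r_divided = (true_w[0] * r_humans + true_w[1] * r_vehicles
             + 0.05 * rng.standard_normal(100))

# Recover the weights with least squares, as a voxelwise linearity test would.
X = np.column_stack([r_humans, r_vehicles])
w, residuals, *_ = np.linalg.lstsq(X, r_divided, rcond=None)

# A simple linearity index: R^2 of the fitted linear combination.
pred = X @ w
r2 = 1 - np.sum((r_divided - pred) ** 2) / np.sum((r_divided - r_divided.mean()) ** 2)
print(w, r2)
```

A high R² says the divided-attention response is well described as linear in the single-target responses; the asymmetry of the recovered weights is the "bias" toward the region's preferred category.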
It has been shown that object and action categories are represented in a continuous semantic map across the cortical surface, and that attending to a specific category causes broad shifts in voxelwise semantic tuning profiles that enhance the representation of the target category. However, the effects of divided attention to multiple categories on semantic representation remain unclear. In line with predictions of the biased-competition model, recent evidence suggests that the brain response to two objects presented simultaneously can be described as a weighted average of the responses to the individual objects presented in isolation, and that attention biases these weights in favor of the target object. We asked whether this biased-competition hypothesis can also account for attentional modulation of semantic representations. To address this question, we recorded participants’ BOLD responses while they performed category-based search in natural movies that contained 831 distinct objects and actions. Three different tasks were used: search for “humans”, search for “vehicles”, and search for “both humans and vehicles” (i.e., divided attention). Voxelwise category models were fit separately under each task, and voxelwise semantic tuning profiles were then obtained using a principal components analysis of the model weights. Semantic tuning profiles were compared across the single-target tasks and the divided-attention task. We find that in higher visual cortex a substantial portion of semantic tuning during divided attention can be expressed as a weighted average of the tuning profiles during attention to single targets. We also find that semantic tuning in category-selective regions is biased toward the preferred object category.
Overall, these results suggest that the biased-competition theory accounts for attentional modulation of semantic representations during natural visual search.

Item Open Access: Border ownership selectivity in human early visual cortex and its modulation by attention (Society for Neuroscience, 2009). Fang, F.; Boyacı, Hüseyin; Kersten, D.
Natural images are usually cluttered because objects occlude one another. A critical aspect of recognizing these visual objects is identifying the borders between image regions that belong to different objects. However, the neural coding of border ownership in human visual cortex is largely unknown. In this study, we designed two simple but compelling stimuli in which a slight change of contextual information could induce a dramatic change of border ownership. Using functional MRI adaptation, we found that border ownership selectivity in V2 was robust and reliable across subjects, and that it was largely dependent on attention. Our study provides the first human evidence that V2 is a critical area for the processing of border ownership and that this processing depends on modulation from higher-level cortical areas.

Item Open Access: Comparing the response modulation hypothesis and the integrated emotions system theory: the role of top-down attention in psychopathy (Elsevier, 2018). Munneke, Jaap; Hoppenbrouwers, S. S.; Little, B.; Kooiman, K.; van der Burg, E.; Theeuwes, J.
Objective: Two major etiological theories on psychopathy propose different mechanisms for how emotional facial expressions are processed by individuals with elevated psychopathic traits. The Response Modulation Hypothesis (RMH) proposes that psychopathic individuals show emotional deficits as a consequence of attentional deployment, suggesting that emotional deficits are situation-specific.
The Integrated Emotions System theory (IES) suggests that psychopathic individuals have a fundamental amygdala dysfunction which precludes adequate responsiveness to the distress of others.
Methods: Participants performed a visual search task in which they had to find a male target face among two female distractor faces. Top-down attentional set was manipulated by having participants respond either to the face's orientation or to its emotional expression.
Results: When emotion was task-relevant, the low-scoring psychopathy group showed attentional capture by happy and fearful distractor faces, whereas the elevated group showed capture by fearful, but not happy, distractor faces.
Conclusion: This study provides evidence for the RMH such that top-down attention influences the way emotional faces attract attention in individuals with elevated psychopathic traits. However, the different response patterns for happy and fearful faces suggest that top-down attention may not determine the processing of all types of emotional facial expressions in psychopathy.

Item Open Access: Cortical processes underlying attentional modulations of dynamic vision (Bilkent University, 2022-09). Çatak, Esra Nur
Visual attention is one of the most fundamental cognitive functions, guiding and influencing a wide range of processes. However, how different neural mechanisms are modulated by selective attention to process information is still subject to debate. Utilizing electroencephalography (EEG), this thesis focused on understanding the time course of visual information processing and its neural underpinnings with paradigms that operate in different attentional modes, such as visual masking, attentional load, and a transparent-motion design. First, we aimed to understand the role of spatial attention in information processing and its possible interactions with metacontrast masking mechanisms.
The behavioral results revealed an interaction effect that suggests differential effects of spatial attention on metacontrast masking. The subsequent EEG analyses revealed significant activation due to masking and attentional load on early negative components located over occipital and parieto-occipital scalp sites, followed by a late positive component centered over centro-parietal electrodes. These findings suggest that the effect of spatial attention may have distinct characteristics at different stages of sensory and perceptual processing in its relationship with metacontrast masking. Second, by employing a novel variant of the transparent-motion design with color and motion swapping, we aimed to isolate the object-based cueing effect from a possible feature-based explanation in both psychophysical measures and neural activity. Our results demonstrate that the behavioral effects of attentional cueing survived feature swaps, providing evidence for an object-based attention mechanism. We also observed event-related potential correlates of these object-based selection effects in the late N1 component range, over occipital and parieto-occipital scalp sites, significantly associated with variation in behavioral performance. Our findings provide the first evidence of the role of the N1 component in object-based attention in this transparent-motion design under conditions that rule out possible feature-based explanations.
Taken together, the present results highlight the substantial effects of selective attention on the processing of visual information after the initial entry of information into the visual system and before the completion of its processing.

Item Open Access: Deep MRI reconstruction with generative vision transformer (Springer, 2021). Korkmaz, Yılmaz; Yurt, Mahmut; Dar, Salman Ul Hassan; Özbey, Muzaffer; Çukur, Tolga
Supervised training of deep network models for MRI reconstruction requires access to large databases of fully-sampled MRI acquisitions. To alleviate dependency on costly databases, unsupervised learning strategies have received interest. A powerful framework that eliminates the need for training data altogether is the deep image prior (DIP), which inverts randomly-initialized models to infer the network parameters most consistent with the undersampled test data. However, existing DIP methods leverage convolutional backbones, which suffer from limited sensitivity to long-range spatial dependencies and thereby poor model invertibility. To address these limitations, here we propose an unsupervised MRI reconstruction method based on a novel generative vision transformer (GVTrans). GVTrans progressively maps low-dimensional noise and latent variables onto MR images via cascaded blocks of cross-attention vision transformers. The cross-attention mechanism between latents and image features serves to enhance representational learning of local and global context. Meanwhile, latent and noise injections at each network layer permit fine control of generated image features, improving model invertibility. Demonstrations are performed for scan-specific reconstruction of brain MRI data at multiple contrasts and acceleration factors. GVTrans yields superior performance to state-of-the-art generative models based on convolutional neural networks (CNNs).

Item Open Access: Do robots distract us as much as humans?
The effect of human-like appearance and perceptual load (IEEE Computer Society, 2020). Ürgen, Burcu A.; Yılmaz, Selin; Güneysu, İlayda; Cerrahoğlu, Begüm; Dinçer, Ece
Attention is an important mechanism for solving certain tasks, but our environment can distract us via irrelevant information. As robots increasingly become part of our lives, one important question is whether they can distract us as much as humans do, and if so to what extent. To address this question, we conducted a study in which subjects were engaged in a central letter-detection task. The task-irrelevant distractors were pictures of three agents: a mechanical robot, a human-like robot, and a real human. We also manipulated the perceptual load to investigate whether the demands of the task influence how much these agents distract us. Our results show that robots distract people as much as humans do, as demonstrated by a significant increase in reaction times and a decrease in task accuracy in the presence of agent distractors compared to when there was no distractor. However, we found that task difficulty interacted with the human-likeness of the distractor agent. When the task was less demanding, the agent that distracted most was the most human-like agent, whereas when the task was more demanding, the least human-like agent distracted the most. These results not only provide insights into how to design humanoid robots but also serve as a great example of a fruitful collaboration between human-robot interaction and cognitive sciences.

Item Open Access: edaGAN: Encoder-Decoder Attention Generative Adversarial Networks for multi-contrast MR image synthesis (Institute of Electrical and Electronics Engineers, 2022-05-16). Dalmaz, Onat; Sağlam, Baturay; Gönç, Kaan; Çukur, Tolga
Magnetic resonance imaging (MRI) is the preferred modality among radiologists in the clinic due to its superior depiction of tissue contrast.
Its ability to capture different contrasts within an exam session allows it to provide additional diagnostic information. However, such multi-contrast MRI exams take a long time to scan, so often only a portion of the required contrasts is acquired. Consequently, synthesizing the missing contrasts can improve subsequent radiological observations and image-analysis tasks such as segmentation and detection. Because of this significant potential, multi-contrast MRI synthesis approaches are gaining popularity. Recently, generative adversarial networks (GANs) have become the de facto choice for synthesis tasks in medical imaging due to their sensitivity to realism and high-frequency structure. In this study, we present a novel generative adversarial approach for multi-contrast MRI synthesis that combines the learning of deep residual convolutional networks with the spatial modulation introduced by an attention-gating mechanism to synthesize high-quality MR images. We show the superiority of the proposed approach against various synthesis models on multi-contrast MRI datasets.

Item Open Access: Fearful faces do not lead to faster attentional deployment in individuals with elevated psychopathic traits (Springer New York LLC, 2017). Hoppenbrouwers, S. S.; Munneke, Jaap; Kooiman, K. A.; Little, B.; Neumann, C. S.; Theeuwes, J.
In the current study, a gaze-cueing experiment (similar to Dawel et al. 2015) was conducted in which the predictivity of a gaze cue was manipulated (non-predictive vs. highly predictive). This was done to assess the degree to which individuals with elevated psychopathic traits can use contextual information (i.e., the predictivity of the cue). Psychopathic traits were measured with the Self-Report Psychopathy Scale-Short Form (SRP-SF) in a mixed sample of undergraduate students and community members.
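Several of the MRI entries above (GVTrans, edaGAN, MoTran) rest on cross-attention: queries come from one stream (e.g., latent tokens) and keys/values from another (e.g., image features). A minimal scaled dot-product sketch in NumPy, with all dimensions and weight initializations invented for illustration rather than taken from the papers:

```python
import numpy as np

def cross_attention(latents, feats, d_k=16, seed=2):
    """Scaled dot-product cross-attention: queries from `latents`,
    keys/values from `feats` (e.g., image feature positions)."""
    rng = np.random.default_rng(seed)
    d_lat, d_feat = latents.shape[1], feats.shape[1]
    Wq = rng.standard_normal((d_lat, d_k)) / np.sqrt(d_lat)
    Wk = rng.standard_normal((d_feat, d_k)) / np.sqrt(d_feat)
    Wv = rng.standard_normal((d_feat, d_k)) / np.sqrt(d_feat)
    Q, K, V = latents @ Wq, feats @ Wk, feats @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_latents, n_positions)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over positions
    return weights @ V                               # (n_latents, d_k)

latents = np.random.default_rng(0).standard_normal((4, 8))   # 4 latent tokens
feats = np.random.default_rng(1).standard_normal((64, 32))   # 64 image positions
out = cross_attention(latents, feats)
print(out.shape)
```

Because the number of query tokens stays fixed and small, the cost grows linearly with the number of image positions rather than quadratically, which is the kind of linear-complexity argument the MoTran abstract below alludes to.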
Results showed no group differences in reaction times between highly predictive and non-predictive cueing blocks, suggesting that individuals with elevated psychopathic traits can indeed use contextual information when it is relevant. In addition, we observed that fearful facial expressions did not lead to a change in reaction times in individuals with elevated psychopathic traits, whereas individuals with low psychopathic traits showed speeded responses to a fearful face compared to a neutral face. This suggests that fearful faces do not lead to faster attentional deployment in individuals with elevated psychopathic traits. © 2017, The Author(s).

Item Open Access: Focal modulation network for lung segmentation in chest X-ray images (2023-08-09). Öztürk, Şaban; Çukur, Tolga
Segmentation of lung regions is of key importance for the automatic analysis of chest X-ray (CXR) images, which play a vital role in the detection of various pulmonary diseases. Precise identification of lung regions is the basic prerequisite for disease diagnosis and treatment planning. However, achieving precise lung segmentation poses significant challenges due to factors such as variations in anatomical shape and size, the presence of strong edges at the rib cage and clavicle, and overlapping anatomical structures resulting from diverse diseases. Although commonly considered the de facto standard in medical image segmentation, the convolutional UNet architecture and its variants fall short in addressing these challenges, primarily due to their limited ability to model long-range dependencies between image features. While vision transformers equipped with self-attention mechanisms excel at capturing long-range relationships, segmentation tasks on high-resolution images typically adopt either coarse-grained global self-attention or fine-grained local self-attention to alleviate the quadratic computational cost, at the expense of performance loss.
This paper introduces a focal modulation UNet model (FMN-UNet) to enhance segmentation performance by effectively aggregating fine-grained local and coarse-grained global relations at a reasonable computational cost. FMN-UNet first encodes CXR images via a convolutional encoder to suppress background regions and extract latent feature maps at a relatively modest resolution. It then leverages global and local attention mechanisms to model contextual relationships across the images. These contextual feature maps are convolutionally decoded to produce segmentation masks. The segmentation performance of FMN-UNet is compared against state-of-the-art methods on three public CXR datasets (JSRT, Montgomery, and Shenzhen). Experiments on each dataset demonstrate the superior performance of FMN-UNet over the baselines.

Item Open Access: Handling of online information by users: evidence from TED talks (Taylor & Francis, 2019-02-27). Özmen, M. U.; Yücel, Eray
This paper studies how people search for, choose, process, and evaluate information provided online. In this context, the study analyses how the content and context of online information are related to the length of information and to user ratings. Employing naturalistic data covering the titles, durations, and viewer-assigned ratings/tags of more than two thousand TED talks, the paper investigates whether (i) talk duration is related to viewer-assigned ratings, (ii) there is a link between talk duration and attention-driving factors (title words), and (iii) the ex-ante wording of talk titles and ex-post user-assigned ratings are connected. The findings show that talks with certain end-user ratings differ significantly in length; most strikingly, talks first rated as persuasive are on average 35% longer than talks first rated as ingenious. The inclusion of certain words in the talk title also significantly affects both talk duration and end-user ratings.
For instance, talks whose titles include ‘child’ are on average 27% longer than other talks, and talks whose titles include ‘brain’ are 57% more likely than others to be rated as fascinating. Overall, the paper reveals regularities in the information-processing attitudes, attention, and subjective evaluations of online information users.

Item Open Access: Increasing the spatial extent of attention strengthens surround suppression (Elsevier Ltd, 2022-10). Kınıklıoğlu, Merve; Boyacı, Hüseyin
Here we investigate how the extent of spatial attention affects center-surround interaction in visual motion processing. To do so, we measured motion direction discrimination thresholds in humans using drifting gratings and two attention conditions. Participants were instructed to limit their attention to the central part of the stimulus under the narrow-attention condition, and to attend to both the central and surround parts under the wide-attention condition. We found stronger surround suppression under the wide-attention condition. The magnitude of the attention effect increased with the size of the surround when the stimulus had low contrast, but did not change when it had high contrast. Results also showed that attention had a weaker effect when the center and surround gratings drifted in opposite directions. Next, to establish a link between the behavioral results and neuronal response characteristics, we performed computer simulations using the divisive normalization model. Our simulations showed that, using smaller versus larger multiplicative attentional gains and parameters derived from the middle temporal (MT) area of the cortex, the model can successfully predict the observed behavioral results. These findings reveal the critical role of spatial attention in surround suppression and establish a link between neuronal activity and behavior.
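The divisive normalization account invoked in the surround-suppression entries above can be sketched as an excitatory center drive divided by a normalization pool that includes the surround, with a multiplicative attention gain on the surround's contribution. The functional form and every parameter value below are a generic textbook-style illustration, not the fitted models from these studies:

```python
def normalized_response(c_center, c_surround, attn_surround=1.0,
                        n=2.0, sigma=0.1, w=0.5):
    """Generic divisive normalization: the center's excitatory drive is
    divided by a pool containing the center, the surround, and a semi-
    saturation constant. attn_surround scales how strongly the (attended)
    surround contributes to the pool."""
    drive = c_center ** n
    pool = sigma ** n + c_center ** n + attn_surround * w * c_surround ** n
    return drive / pool

# Surround suppression: adding a surround reduces the center response.
r_center_only = normalized_response(0.8, 0.0)
r_with_surround = normalized_response(0.8, 0.8)

# Widening attention over the surround (a larger gain on its pool term)
# strengthens suppression, mirroring the behavioral result above.
r_narrow = normalized_response(0.8, 0.8, attn_surround=1.0)
r_wide = normalized_response(0.8, 0.8, attn_surround=2.0)
print(r_center_only, r_with_surround, r_narrow, r_wide)
```

The qualitative ordering (center-only response > suppressed response, and narrow-attention response > wide-attention response) is what the model needs to reproduce; the quantitative fits in the studies use MT-derived parameters.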
Further, these results suggest that the reduced surround suppression found in certain clinical disorders (e.g., schizophrenia and autism spectrum disorder) may be caused by abnormal attention mechanisms.

Item Open Access: MRI reconstruction with conditional adversarial transformers (Springer Cham, 2022-09-22). Korkmaz, Yılmaz; Özbey, Muzaffer; Çukur, Tolga; Haq, Nandinee; Johnson, Patricia; Maier, Andreas; Qin, Chen; Würfl, Tobias; Yoo, Jaejun
Deep learning has been successfully adopted for accelerated MRI reconstruction given its exceptional performance in inverse problems. Deep reconstruction models are commonly based on convolutional neural network (CNN) architectures that use compact, input-invariant filters to capture static local features in data. While this inductive bias allows efficient model training on relatively small datasets, it also limits sensitivity to long-range context and compromises generalization performance. Transformers are a promising alternative that use broad-scale, input-adaptive filtering to improve contextual sensitivity and generalization. Yet, existing transformer architectures induce quadratic complexity and often neglect the physical signal model. Here, we introduce a model-based transformer architecture (MoTran) for high-performance MRI reconstruction. MoTran is an adversarial architecture that unrolls transformer and data-consistency blocks in its generator. Cross-attention transformers are leveraged to maintain linear complexity in the feature map size. Comprehensive experiments on MRI reconstruction tasks show that the proposed model improves image quality over state-of-the-art CNN models.

Item Open Access: Oscillatory synchronization model of attention to moving objects (Elsevier, 2012). Yilmaz, O.
The world is a dynamic environment, hence it is important for the visual system to be able to deploy attention on moving objects and attentively track them.
Psychophysical experiments indicate that processes of both attentional enhancement and inhibition are spatially focused on the moving objects; however, the mechanisms of these processes are unknown. Studies indicate that the attentional selection of target objects is sustained via a feedforward-feedback loop in the visual cortical hierarchy, and that only the target objects are represented in attention-related areas. We suggest that feedback from attention-related areas to early visual areas modulates the activity of neurons: it establishes synchronization with respect to a common oscillatory signal for target items via excitatory feedback, and desynchronization for distractor items via inhibitory feedback. A two-layer computational neural network model with integrate-and-fire neurons is proposed and simulated for simple attentive tracking tasks. Consistent with previous modeling studies, we show that via temporal tagging of neural activity, distractors can be attentively suppressed from propagating to higher levels. However, the simulations also suggest attentional enhancement of activity for distractors in the first layer, which represents the neural substrate dedicated to low-level feature processing. Inspired by this enhancement mechanism, we developed a feature-based object tracking algorithm with surround processing. Surround processing improved tracking performance by 57% on the PETS 2001 dataset by eliminating target features that are likely to suffer from faulty correspondence assignments. © 2012 Elsevier Ltd.

Item Open Access: Perceptual averaging in individuals with autism spectrum disorder (Frontiers Research Foundation, 2016). Corbett, Jennifer Elise; Venuti, P.; Melcher, D.
There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception.
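The temporal-tagging idea in the oscillatory synchronization entry above can be illustrated with a minimal leaky integrate-and-fire sketch: two neurons receive the same feedforward drive, but the target neuron gets excitatory feedback in phase with a common oscillation while the distractor gets inhibitory feedback at that phase, pushing its spikes out of phase. Everything here (time constants, gains, the 40 Hz reference) is an invented toy, not the paper's two-layer model:

```python
import numpy as np

def lif_spikes(drive, dt=1.0, tau=20.0, v_thresh=1.0):
    """Leaky integrate-and-fire: Euler integration, reset to 0 on spike;
    returns spike times (ms) for a given drive signal."""
    v, spikes = 0.0, []
    for t, current in enumerate(drive):
        v += dt * (-v / tau + current)
        if v >= v_thresh:
            spikes.append(t * dt)
            v = 0.0
    return np.array(spikes)

t = np.arange(0, 500.0, 1.0)                  # 500 ms at 1 ms resolution
oscillation = np.sin(2 * np.pi * 40e-3 * t)   # 40 Hz reference signal
stimulus = 0.08                               # constant feedforward drive

# Target: excitatory feedback in phase with the oscillation.
# Distractor: inhibitory feedback at the same phase.
target_spikes = lif_spikes(stimulus + 0.04 * oscillation)
distractor_spikes = lif_spikes(stimulus - 0.04 * oscillation)

def mean_phase_alignment(spike_times, freq=40e-3):
    """Mean of the oscillation sampled at spike times; larger values mean
    firing concentrated in the oscillation's positive half-cycle."""
    return float(np.mean(np.sin(2 * np.pi * freq * spike_times)))

# The target should align with the oscillation more than the distractor,
# giving a downstream coincidence detector a phase 'tag' to read out.
print(mean_phase_alignment(target_spikes),
      mean_phase_alignment(distractor_spikes))
```

A downstream layer that only responds to spikes arriving near the reference phase would then propagate target activity and suppress distractor activity, which is the gating the model describes.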
Sensitivity to the mean (or another prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies of individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing suggesting that they may rely more on enhanced local representations of individual objects than on such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD, using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants recalled the mean size of a set of circles (mean task) with above-chance accuracy despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean-size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision and further our understanding of how autistic individuals make sense of the external environment. © 2016 Corbett, Venuti and Melcher.

Item Open Access: Retinotopic sensitisation to spatial scale: evidence for flexible spatial frequency processing in scene perception (Elsevier Ltd., 2006). Ozgen, E.; Payne, H. E.; Sowden, P. T.; Schyns, P. G.
Observers can use spatial scale information flexibly, depending on the categorisation task and on their prior sensitisation. Here, we explore whether attentional modulation of spatial frequency processing at early stages of visual analysis may be responsible.
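The mean-versus-member dissociation in the perceptual averaging entry above (accurate mean recall despite poor recall of individual items) falls out of simple noisy averaging, because independent noise cancels across items. A toy simulation, with every parameter invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_items, noise_sd = 10_000, 8, 0.3

# True sizes on each trial; each item's internal representation is noisy.
true_sizes = rng.uniform(0.5, 1.5, size=(n_trials, n_items))
noisy_reps = true_sizes + rng.normal(0.0, noise_sd, size=(n_trials, n_items))

# Error when reporting one probed member vs. reporting the mean of the set.
member_error = np.abs(noisy_reps[:, 0] - true_sizes[:, 0]).mean()
mean_error = np.abs(noisy_reps.mean(axis=1) - true_sizes.mean(axis=1)).mean()

print(member_error, mean_error)
# Averaging n items with independent noise shrinks the noise by ~sqrt(n):
print(member_error / mean_error)  # close to sqrt(8) ≈ 2.8
```

So an observer whose individual-item representations are noisy can still report the set mean well, which is the signature pattern in both the ASD and control groups.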
In three experiments, we find that observers' perception of spatial frequency (SF) band-limited scene stimuli is determined by the SF content of images previously experienced at that location during a sensitisation phase. We conclude that these findings are consistent with the involvement of relatively early, retinotopically mapped stages of visual analysis, supporting an account of sensitisation effects based on attentional modulation of spatial frequency channels.

Item Open Access: Set similarity modulates object tracking in dynamic environments (Springer New York LLC, 2018). Akyuz, Sibel; Munneke, J.; Corbett, J. E.
Based on the observation that sports teams rely on colored jerseys to define group membership, we examined how grouping by similarity affected observers' ability to track a “ball” target passed among 20 colored-circle “players” divided into either two color “teams” of 10 players each or five color teams of four players each. Observers were more accurate and exerted less effort (indexed by pupil diameter) when their task was to count the number of times any player gained possession of the ball than when they had to count only the possessions of a given color team, especially when players were grouped into fewer teams with more members each. Overall, the results confirm previous reports of costs for segregating a larger set into smaller subsets and suggest that grouping by similarity facilitates processing at the set level.