Browsing by Author "Clarke, Aaron"
Now showing 1 - 12 of 12
Item Open Access
Deleterious effects of roving on learned tasks (Elsevier, 2014)
Clarke, Aaron; Grzeczkowski, L.; Mast, F.; Gauthier, I.; Herzog, M.
In typical perceptual learning experiments, one stimulus type (e.g., a bisection stimulus offset either to the left or right) is presented per trial. In roving, two different stimulus types (e.g., a 30′ and a 20′ wide bisection stimulus) are randomly interleaved from trial to trial. Roving can impair both perceptual learning and task sensitivity. Here, we investigate the relationship between the two. Using a bisection task, we found no effect of roving before training. We next trained subjects and they improved. A roving condition applied after training impaired sensitivity.

Item Open Access
Distinct perceptual grouping pathways revealed by temporal carriers and envelopes (Association for Research in Vision and Ophthalmology, 2008)
Rainville, S.; Clarke, Aaron
S. E. Guttman, L. A. Gilroy, and R. Blake (2005) investigated whether observers could perform temporal grouping in multi-element displays where each local element was stochastically modulated over time along one of several potential dimensions—or "messenger types"—such as contrast, position, orientation, or spatial scale. Guttman et al.'s data revealed that grouping discards messenger type and therefore support a single-pathway model that groups elements with similar temporal waveforms. In the current study, we carried out three experiments in which temporal-grouping information resided either in the carrier, the envelope, or the combined carrier and envelope of each messenger's timecourse. Results revealed that grouping is highly specific for messenger type if carrier envelopes lack grouping information but largely messenger-nonspecific if carrier envelopes contain grouping information. These results imply that temporal grouping is mediated by several messenger-specific carrier pathways as well as by a messenger-nonspecific envelope pathway.
Findings also challenge simple temporal-filtering accounts of perceptual grouping (E. H. Adelson & H. Farid, 1999).

Item Open Access
Does spatio-temporal filtering account for nonretinotopic motion perception? Comment on Pooresmaeili, Cicchini, Morrone, and Burr (2012) (ARVO, 2013)
Clarke, Aaron; Repnow, M.; Öğmen, H.; Herzog, M.

Item Open Access
Hemifield asymmetry in the potency of exogenous auditory and visual cues (Elsevier, 2011)
Sosa, Y.; Clarke, Aaron; McCourt, M.
Neurologically normal subjects misperceive the midpoints of lines (the point of subjective equality; PSE) as reliably leftward of veridical center, a phenomenon known as pseudoneglect. This leftward bias reflects the dominance of the right cerebral hemisphere in deploying spatial attention. Transient visual cues, delivered to either the left or right endpoints of lines, modulate PSE such that leftward biases are increased by leftward cues and decreased by rightward cues, relative to a no-cue control condition. We ask whether lateralized auditory cues can similarly influence PSE in a tachistoscopic visual line bisection task, and describe how visual and auditory cues, in spatially synergistic or antagonistic combinations, jointly influence PSE. Our results demonstrate that whereas auditory and visual cues both modulate PSE, visual cues are overall more potent than auditory cues. Visual and auditory cues are weighted such that visual cues are significantly more potent than auditory cues when visual cues are delivered to left hemispace. Visual and auditory cues are equipotent when visual cues are delivered to right hemispace. These results are consistent with the existence of independent lateralized networks governing the deployment of visuospatial and audiospatial attention. An analysis of the weighting of unisensory visual and auditory cues that optimally predicts PSE in multisensory cue conditions shows that cues combine additively.
There was no evidence for a superadditive multisensory cue combination.

Item Open Access
Human and machine learning in non-Markovian decision making (Public Library of Science, 2015)
Clarke, Aaron; Friedrich, J.; Tartaglia, E.; Marchesotti, S.; Senn, W.; Herzog, M.
Humans can learn under a wide variety of feedback conditions. Reinforcement learning (RL), where a series of rewarded decisions must be made, is a particularly important type of learning. Computational and behavioral studies of RL have focused mainly on Markovian decision processes, where the next state depends only on the current state and action. Little is known about non-Markovian decision making, where the next state depends on more than the current state and action. Learning is non-Markovian, for example, when there is no unique mapping between actions and feedback. We have produced a model based on spiking neurons that can handle these non-Markovian conditions by performing policy gradient descent [1]. Here, we examine the model's performance and compare it with human learning and a Bayes-optimal reference, which provides an upper bound on performance. We find that, in all cases, our spiking-neuron population model describes human performance well.

Item Open Access
Is there a common factor for vision? (ARVO, 2014)
Cappe, C.; Clarke, Aaron; Mohr, C.; Herzog, M.
In cognition, common factors play a crucial role. For example, different types of intelligence are highly correlated, pointing to a common factor, which is often called g. One might expect that a similar common factor would also exist for vision. Surprisingly, no one in the field has addressed this issue. Here, we provide the first evidence that there is no common factor for vision. We tested 40 healthy students' performance in six basic visual paradigms: visual acuity, vernier discrimination, two visual backward masking paradigms, Gabor detection, and bisection discrimination.
One might expect that performance levels on these tasks would be highly correlated because some individuals generally have better vision than others due to superior optics, better retinal or cortical processing, or enriched visual experience. However, only four out of 15 correlations were significant, two of which were nontrivial. These results cannot be explained by high intraobserver variability or ceiling effects because test–retest reliability was high and the variance in our student population is commensurate with that from other studies with well-sighted populations. Using a variety of tests (e.g., principal components analysis, Bayes' theorem, test–retest reliability), we show the robustness of our null results. We suggest that neuroplasticity operates during everyday experience to generate marked individual differences. Our results apply only to the normally sighted population (i.e., restricted-range sampling). For the entire population, including those with degenerate vision, we expect different results.

Item Open Access
No evidence for a common factor underlying visual abilities in healthy older people (American Psychological Association, 2019)
Shaqiri, A.; Pilz, K. S.; Cretenoud, A. F.; Neumann, K.; Clarke, Aaron; Kunchulia, M.; Herzog, M. H.
The world's population is aging at an increasing rate. Even in the absence of neurodegenerative disorders, healthy aging affects perception and cognition. In the context of cognition, common factors are well established. Much less is known about common factors for vision. Here, we tested 92 healthy older and 104 healthy younger participants in 19 visual tests (including visual search and contrast sensitivity) and three cognitive tests (including verbal fluency and digit span). Unsurprisingly, younger participants performed better than older participants in almost all tests.
Surprisingly, however, the performance of older participants was mostly uncorrelated between visual tests, and we found no evidence for a common factor.

Item Open Access
Trait anxiety and post-learning stress do not affect perceptual learning (Elsevier, 2012)
Aberg, K.; Clarke, Aaron; Sandi, C.; Herzog, M.
While it is well established that stress can modulate declarative learning, very few studies have investigated the influence of stress on non-declarative learning. Here, we studied the influence of post-learning stress, which effectively modulates declarative learning, on perceptual learning of a visual texture discrimination task (TDT). On day one, participants trained for one session with TDT and were instructed that they, at any time, could be exposed to either a high stressor (ice–water; Cold Pressor Test; CPT) or a low stressor (warm water). Participants did not know when or which stressor they would be exposed to. To determine the impact of the stressor on TDT learning, all participants returned the following day to perform another TDT session. Only participants exposed to the high stressor had significantly elevated cortisol levels. However, there was no difference in TDT improvements from day one to day two between the groups. Recent studies suggested that trait anxiety modulates visual perception under anticipation of stressful events. Here, trait anxiety neither modulated performance nor influenced responsiveness to stress. These results do not support a modulatory role for stress in non-declarative perceptual learning.

Item Open Access
Visual crowding illustrates the inadequacy of local vs. global and feedforward vs. feedback distinctions in modeling visual perception (Frontiers, 2014)
Clarke, Aaron; Herzog, M.; Francis, G.
Experimentalists tend to classify models of visual perception as being either local or global, and involving either feedforward or feedback processing. We argue that these distinctions are not as helpful as they might appear, and we illustrate these issues by analyzing models of visual crowding as an example. Recent studies have argued that crowding cannot be explained by purely local processing, but that instead, global factors such as perceptual grouping are crucial. Theories of perceptual grouping, in turn, often invoke feedback connections as a way to account for their global properties. We examined three types of crowding models that are representative of global processing models, two of which employ feedback processing: a model based on Fourier filtering, a feedback neural network, and a specific feedback neural architecture that explicitly models perceptual grouping. Simulations demonstrate that crucial empirical findings are not accounted for by any of the models. We conclude that empirical investigations that reject a local or feedforward architecture offer almost no constraints for model construction, as there are an uncountable number of global and feedback systems. We propose that the identification of a system as being local or global and feedforward or feedback is less important than the identification of a system's computational details. Only the latter information can provide constraints on model development and promote quantitative explanations of complex phenomena.

Item Open Access
What crowding can tell us about object representations (Association for Research in Vision and Ophthalmology Inc., 2016)
Manassi, M.; Lonchampt, S.; Clarke, Aaron; Herzog, M. H.
In crowding, perception of a target usually deteriorates when flanking elements are presented next to the target. Surprisingly, adding further flankers can lead to a release from crowding. In previous work we showed that, for example, vernier offset discrimination at 9° of eccentricity deteriorated when a vernier was embedded in a square. Adding further squares improved performance.
The more squares presented, the better the performance, extending across 20° of the visual field. Here, we show that very similar results hold true for shapes other than squares, including unfamiliar, irregular shapes. Hence, uncrowding is not restricted to simple and familiar shapes. Our results provoke the question of whether any type of shape is represented at any location in the visual field. Moreover, small changes in the orientation of the flanking shapes led to strong increases in crowding strength. Hence, highly specific shape interactions across large parts of the visual field determine vernier acuity.

Item Open Access
What to choose next? A paradigm for testing human sequential decision making (Frontiers Research Foundation, 2017)
Tartaglia, E. M.; Clarke, Aaron; Herzog, M. H.
Many of the decisions we make in our everyday lives are sequential and entail sparse rewards. While sequential decision-making has been extensively investigated in theory (e.g., by reinforcement learning models), there is no systematic experimental paradigm to test it. Here, we developed such a paradigm and investigated key components of reinforcement learning models: the eligibility trace (i.e., the memory trace of previous decision steps), the external reward, and the ability to exploit the statistics of the environment's structure (model-free vs. model-based mechanisms). We show that the eligibility trace decays not with sheer time, but rather with the number of discrete decision steps made by the participants. We further show that, unexpectedly, neither monetary rewards nor the environment's spatial regularity significantly modulate behavioral performance.
Finally, we found that model-free learning algorithms describe human performance better than model-based algorithms.

Item Open Access
Why vision is not both hierarchical and feedforward (Frontiers, 2014)
Herzog, M.; Clarke, Aaron
In classical models of object recognition, basic features (e.g., edges and lines) are first analyzed by independent filters that mimic the receptive field profiles of V1 neurons. In a feedforward fashion, the outputs of these filters are fed to filters at the next processing stage, which pool information across several filters from the previous level, and so forth at subsequent processing stages. Low-level processing determines high-level processing; information lost at lower stages is irretrievably lost. Models of this type have proven very successful in many fields of vision but have failed to explain object recognition in general. Here, we present experiments showing, first, that, similar to demonstrations from the Gestaltists, figural aspects determine low-level processing (as much as the other way around). Second, performance on a single element depends on all the other elements in the visual scene: small changes in the overall configuration can lead to large changes in performance. Third, grouping of elements is key: only if we know how elements group across the entire visual field can we determine performance on individual elements. This challenges the classical stereotypical filtering approach, which is at the very heart of most vision models.
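The "information lost at lower stages is irretrievably lost" point from the last abstract can be illustrated with a toy sketch (not code from any of the listed papers; all names and stimuli here are hypothetical): a two-stage feedforward model in which stage 2 pools rectified stage-1 filter responses across space. Because pooling discards spatial arrangement, two different stimulus configurations can yield identical pooled responses, so no later stage can tell them apart.

```python
def stage1(image, kernel):
    # Stage 1: local filtering (1-D valid cross-correlation, for brevity)
    n = len(image) - len(kernel) + 1
    return [sum(image[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

def stage2(responses):
    # Stage 2: pooling across space (sum of rectified responses);
    # this is where spatial-configuration information is discarded
    return sum(max(r, 0.0) for r in responses)

kernel = [1.0, -1.0]               # crude edge detector
config_a = [0, 1, 1, 0, 0]         # a bar on the left
config_b = [0, 0, 1, 1, 0]         # the same bar shifted right

r_a = stage1(config_a, kernel)     # [-1.0, 0.0, 1.0, 0.0]
r_b = stage1(config_b, kernel)     # [0.0, -1.0, 0.0, 1.0]

# Stage-1 responses still distinguish the configurations...
print(r_a != r_b)                  # True
# ...but after pooling they are identical, so the difference
# is irretrievable by any subsequent feedforward stage.
print(stage2(r_a), stage2(r_b))    # 1.0 1.0
```

The sketch is deliberately minimal; the abstract's argument is that real configural effects (grouping across the whole visual field) cannot be recovered once such pooling has happened, which is why the authors argue against purely hierarchical feedforward accounts.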