Browsing by Author "Herzog, M."
Item (Open Access): Deleterious effects of roving on learned tasks (Elsevier, 2014). Clarke, Aaron; Grzeczkowski, L.; Mast, F.; Gauthier, I.; Herzog, M.
In typical perceptual learning experiments, one stimulus type (e.g., a bisection stimulus offset either to the left or right) is presented per trial. In roving, two different stimulus types (e.g., a 30′ and a 20′ wide bisection stimulus) are randomly interleaved from trial to trial. Roving can impair both perceptual learning and task sensitivity. Here, we investigate the relationship between the two. Using a bisection task, we found no effect of roving before training. We next trained subjects, who improved. A roving condition applied after training impaired sensitivity.

Item (Open Access): Does spatio-temporal filtering account for nonretinotopic motion perception? Comment on Pooresmaeili, Cicchini, Morrone, and Burr (2012) (ARVO, 2013). Clarke, Aaron; Repnow, M.; Öğmen, H.; Herzog, M.

Item (Open Access): Human and machine learning in non-Markovian decision making (Public Library of Science, 2015). Clarke, Aaron; Friedrich, J.; Tartaglia, E.; Marchesotti, S.; Senn, W.; Herzog, M.
Humans can learn under a wide variety of feedback conditions. Reinforcement learning (RL), where a series of rewarded decisions must be made, is a particularly important type of learning. Computational and behavioral studies of RL have focused mainly on Markovian decision processes, where the next state depends only on the current state and action. Little is known about non-Markovian decision making, where the next state depends on more than the current state and action. Learning is non-Markovian, for example, when there is no unique mapping between actions and feedback. We have produced a model based on spiking neurons that can handle these non-Markovian conditions by performing policy gradient descent [1].
Here, we examine the model's performance and compare it with human learning and a Bayes-optimal reference, which provides an upper bound on performance. We find that in all cases our spiking-neuron population model describes human performance well.

Item (Open Access): Is there a common factor for vision? (ARVO, 2014). Cappe, C.; Clarke, Aaron; Mohr, C.; Herzog, M.
In cognition, common factors play a crucial role. For example, different types of intelligence are highly correlated, pointing to a common factor, which is often called g. One might expect that a similar common factor would also exist for vision. Surprisingly, no one in the field has addressed this issue. Here, we provide the first evidence that there is no common factor for vision. We tested 40 healthy students' performance in six basic visual paradigms: visual acuity, vernier discrimination, two visual backward masking paradigms, Gabor detection, and bisection discrimination. One might expect performance levels on these tasks to be highly correlated because some individuals generally have better vision than others due to superior optics, better retinal or cortical processing, or enriched visual experience. However, only four out of 15 correlations were significant, two of which were nontrivial. These results cannot be explained by high intraobserver variability or ceiling effects because test–retest reliability was high and the variance in our student population is commensurate with that from other studies with well-sighted populations. Using a variety of tests (e.g., principal components analysis, Bayes' theorem, test–retest reliability), we show the robustness of our null results. We suggest that neuroplasticity operates during everyday experience to generate marked individual differences. Our results apply only to the normally sighted population (i.e., restricted-range sampling).
For the entire population, including those with degenerate vision, we expect different results.

Item (Open Access): Trait anxiety and post-learning stress do not affect perceptual learning (Elsevier, 2012). Aberg, K.; Clarke, Aaron; Sandi, C.; Herzog, M.
While it is well established that stress can modulate declarative learning, very few studies have investigated the influence of stress on non-declarative learning. Here, we studied the influence of post-learning stress, which effectively modulates declarative learning, on perceptual learning of a visual texture discrimination task (TDT). On day one, participants trained for one session with the TDT and were instructed that, at any time, they could be exposed to either a high stressor (ice water; Cold Pressor Test; CPT) or a low stressor (warm water). Participants did not know when or to which stressor they would be exposed. To determine the impact of the stressor on TDT learning, all participants returned the following day to perform another TDT session. Only participants exposed to the high stressor had significantly elevated cortisol levels. However, there was no difference between the groups in TDT improvement from day one to day two. Recent studies have suggested that trait anxiety modulates visual perception under anticipation of stressful events. Here, trait anxiety neither modulated performance nor influenced responsiveness to stress. These results do not support a modulatory role for stress in non-declarative perceptual learning.

Item (Open Access): Visual crowding illustrates the inadequacy of local vs. global and feedforward vs.
feedback distinctions in modeling visual perception (Frontiers, 2014). Clarke, Aaron; Herzog, M.; Francis, G.
Experimentalists tend to classify models of visual perception as being either local or global, and as involving either feedforward or feedback processing. We argue that these distinctions are not as helpful as they might appear, and we illustrate these issues by analyzing models of visual crowding as an example. Recent studies have argued that crowding cannot be explained by purely local processing, but that instead global factors such as perceptual grouping are crucial. Theories of perceptual grouping, in turn, often invoke feedback connections as a way to account for their global properties. We examined three types of crowding models that are representative of global processing models, two of which employ feedback processing: a model based on Fourier filtering, a feedback neural network, and a specific feedback neural architecture that explicitly models perceptual grouping. Simulations demonstrate that crucial empirical findings are not accounted for by any of the models. We conclude that empirical investigations that reject a local or feedforward architecture offer almost no constraints for model construction, as there are an uncountable number of global and feedback systems. We propose that the identification of a system as local or global and feedforward or feedback is less important than the identification of a system's computational details. Only the latter information can provide constraints on model development and promote quantitative explanations of complex phenomena.

Item (Open Access): Why vision is not both hierarchical and feedforward (Frontiers, 2014). Herzog, M.; Clarke, Aaron
In classical models of object recognition, basic features (e.g., edges and lines) are first analyzed by independent filters that mimic the receptive field profiles of V1 neurons. In a feedforward fashion, the outputs of these filters are fed to filters at the next processing stage, which pool information across several filters from the previous level, and so forth at subsequent processing stages. Low-level processing determines high-level processing.
Information lost at lower stages is irretrievably lost. Models of this type have proven very successful in many fields of vision but have failed to explain object recognition in general. Here, we present experiments showing, first, that, similar to demonstrations from the Gestaltists, figural aspects determine low-level processing (as much as the other way around). Second, performance on a single element depends on all the other elements in the visual scene; small changes in the overall configuration can lead to large changes in performance. Third, grouping of elements is key: only if we know how elements group across the entire visual field can we determine performance on individual elements, challenging the classical stereotypical filtering approach that lies at the very heart of most vision models.
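The "common factor" study above rests on the 15 pairwise correlations among six visual tasks (6 × 5 / 2 = 15). A minimal sketch of that kind of analysis is shown below; the data here are synthetic placeholders, not the study's measurements, and the variable names are illustrative assumptions.

```python
# Hypothetical sketch of the pairwise-correlation analysis from
# "Is there a common factor for vision?": six task scores per observer,
# giving 6 choose 2 = 15 pairwise Pearson correlations.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_tasks = 40, 6                 # 40 students, six paradigms
scores = rng.standard_normal((n_subjects, n_tasks))  # placeholder data

# All pairwise Pearson correlations between task-score columns.
pairs = list(itertools.combinations(range(n_tasks), 2))
corrs = {(i, j): np.corrcoef(scores[:, i], scores[:, j])[0, 1]
         for i, j in pairs}

print(len(corrs))  # 15 task pairs
```

With independent placeholder data the correlations hover near zero; the study's point is that real observers looked much the same, arguing against a single g-like factor for vision.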