Human and machine learning in non-Markovian decision making

buir.contributor.author: Clarke, Aaron
dc.citation.issueNumber: 4
dc.citation.spage: PLoS ONE
dc.citation.volumeNumber: 10
dc.contributor.author: Clarke, Aaron
dc.contributor.author: Friedrich, J.
dc.contributor.author: Tartaglia, E.
dc.contributor.author: Marchesotti, S.
dc.contributor.author: Senn, W.
dc.contributor.author: Herzog, M.
dc.date.accessioned: 2020-04-09T14:50:28Z
dc.date.available: 2020-04-09T14:50:28Z
dc.date.issued: 2015
dc.department: Aysel Sabuncu Brain Research Center (BAM)
dc.description.abstract: Humans can learn under a wide variety of feedback conditions. Reinforcement learning (RL), where a series of rewarded decisions must be made, is a particularly important type of learning. Computational and behavioral studies of RL have focused mainly on Markovian decision processes, where the next state depends only on the current state and action. Little is known about non-Markovian decision making, where the next state depends on more than the current state and action. Learning is non-Markovian, for example, when there is no unique mapping between actions and feedback. We have produced a model based on spiking neurons that can handle these non-Markovian conditions by performing policy gradient descent [1]. Here, we examine the model's performance and compare it with human learning and a Bayes-optimal reference, which provides an upper bound on performance. We find that in all cases our spiking-neuron population model describes human performance well.
dc.description.provenance: Submitted by Onur Emek (onur.emek@bilkent.edu.tr) on 2020-04-09T14:50:28Z. No. of bitstreams: 1. Bilkent-research-paper.pdf: 268963 bytes, checksum: ad2e3a30c8172b573b9662390ed2d3cf (MD5)
dc.identifier.doi: 10.1371/journal.pone.0123105
dc.identifier.issn: 1932-6203
dc.identifier.uri: http://hdl.handle.net/11693/53571
dc.language.iso: English
dc.publisher: Public Library of Science
dc.relation.isversionof: https://doi.org/10.1371/journal.pone.0123105
dc.source.title: PLoS ONE
dc.title: Human and machine learning in non-Markovian decision making
dc.type: Article
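The abstract above describes learning by policy gradient descent when feedback has no unique mapping to individual actions. As a rough illustration only (a hypothetical toy example, not the paper's spiking-neuron model or Bayes-optimal reference), the sketch below applies a REINFORCE-style update to a task where reward depends on a pair of actions, so no single action maps uniquely to the feedback; all names and parameter values here are assumptions.

```python
import math
import random

# Hypothetical toy non-Markovian task: reward is delivered only when a *pair*
# of consecutive actions is (1, 1), so feedback has no unique mapping to any
# single action -- the learner cannot credit one action from one reward.
def run_trial(policy_logit):
    p = 1.0 / (1.0 + math.exp(-policy_logit))            # P(action = 1)
    actions = [1 if random.random() < p else 0 for _ in range(2)]
    reward = 1.0 if actions == [1, 1] else 0.0           # pair-level feedback
    return actions, reward, p

def reinforce(episodes=5000, lr=0.5, seed=0):
    """REINFORCE-style policy gradient ascent on a single Bernoulli logit."""
    random.seed(seed)
    logit = 0.0
    for _ in range(episodes):
        actions, reward, p = run_trial(logit)
        # For a Bernoulli policy, d/d(logit) log pi(a) = (a - p);
        # summing over the pair spreads credit across both actions.
        grad = sum(a - p for a in actions)
        logit += lr * reward * grad
    return logit
```

After training, the logit should be strongly positive: the policy learns to emit the rewarded action pair even though each individual reward is ambiguous about which action earned it.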

Files

Original bundle
Name: Bilkent-research-paper.pdf
Size: 262.66 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed upon to submission