Actor prioritized experience replay (abstract reprint)

buir.contributor.author: Mutlu, Furkan
buir.contributor.author: Çiçek, Doğan
buir.contributor.author: Kozat, Süleyman
dc.citation.issueNumber: 20
dc.citation.volumeNumber: 38
dc.contributor.author: Sağlam, Baturay
dc.contributor.author: Mutlu, Furkan
dc.contributor.author: Çiçek, Doğan
dc.contributor.author: Kozat, Süleyman
dc.coverage.spatial: Vancouver, Canada
dc.date.accessioned: 2025-03-11T07:57:18Z
dc.date.available: 2025-03-11T07:57:18Z
dc.date.issued: 2024-03-24
dc.department: Department of Electrical and Electronics Engineering
dc.description: Conference Name: 38th AAAI Conference on Artificial Intelligence (AAAI) / 36th Conference on Innovative Applications of Artificial Intelligence / 14th Symposium on Educational Advances in Artificial Intelligence; Date of Conference: February 20-27, 2024
dc.description.abstract: A widely studied deep reinforcement learning (RL) technique known as Prioritized Experience Replay (PER) allows agents to learn from transitions sampled with non-uniform probability proportional to their temporal-difference (TD) error. Although PER has been shown to be one of the most crucial components for the overall performance of deep RL methods in discrete action domains, many empirical studies indicate that it considerably underperforms when combined with off-policy actor-critic algorithms. We theoretically show that actor networks cannot be effectively trained with transitions that have large TD errors; as a result, the approximate policy gradient computed under the Q-network diverges from the actual gradient computed under the optimal Q-function. Motivated by this, we introduce a novel experience replay sampling framework for actor-critic methods that also addresses stability issues and recent findings on the poor empirical performance of PER. The introduced algorithm suggests a new branch of improvements to PER and schedules effective and efficient training for both actor and critic networks. An extensive set of experiments verifies our theoretical findings, showing that our method outperforms competing approaches and achieves state-of-the-art results over the standard off-policy actor-critic algorithms.
dc.identifier.doi: 10.1609/aaai.v38i20.30610
dc.identifier.eissn: 2374-3468
dc.identifier.issn: 2159-5399
dc.identifier.uri: https://hdl.handle.net/11693/117070
dc.language.iso: English
dc.publisher: AAAI Press
dc.relation.ispartof: Thirty-eighth AAAI conference on artificial intelligence
dc.relation.ispartofseries: AAAI Conference on Artificial Intelligence
dc.relation.isversionof: https://dx.doi.org/10.1609/aaai.v38i20.30610
dc.rights: CC BY 4.0 DEED (Attribution 4.0 International)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source.title: Thirty-eighth AAAI conference on artificial intelligence
dc.title: Actor prioritized experience replay (abstract reprint)
dc.type: Other
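
The abstract describes Prioritized Experience Replay (PER), which samples stored transitions with probability proportional to the magnitude of their TD error. For orientation only, the Python sketch below shows a plain proportional PER buffer; the class name, hyperparameter defaults, and flat-array storage are illustrative assumptions (a practical buffer would use a sum-tree for efficient sampling), and the paper's own framework goes further by scheduling separate sampling for the actor and critic networks rather than using this vanilla scheme.

    import numpy as np

    class ProportionalReplayBuffer:
        """Minimal sketch of proportional prioritized replay (Schaul et al., 2016)."""

        def __init__(self, capacity, alpha=0.6, eps=1e-6):
            self.capacity = capacity
            self.alpha = alpha      # how strongly priorities shape the sampling distribution
            self.eps = eps          # keeps priorities strictly positive
            self.storage = []       # transitions: (state, action, reward, next_state, done)
            self.priorities = np.zeros(capacity, dtype=np.float64)
            self.pos = 0

        def add(self, transition):
            # New transitions get the current maximum priority so they are seen at least once.
            max_prio = self.priorities.max() if self.storage else 1.0
            if len(self.storage) < self.capacity:
                self.storage.append(transition)
            else:
                self.storage[self.pos] = transition
            self.priorities[self.pos] = max_prio
            self.pos = (self.pos + 1) % self.capacity

        def sample(self, batch_size, beta=0.4):
            # P(i) = p_i^alpha / sum_j p_j^alpha, with p_i = |TD error_i| + eps.
            prios = self.priorities[: len(self.storage)]
            probs = prios ** self.alpha
            probs /= probs.sum()
            idx = np.random.choice(len(self.storage), batch_size, p=probs)
            # Importance-sampling weights correct the bias of non-uniform sampling.
            weights = (len(self.storage) * probs[idx]) ** (-beta)
            weights /= weights.max()
            batch = [self.storage[i] for i in idx]
            return batch, idx, weights

        def update_priorities(self, idx, td_errors):
            # Priority is proportional to the magnitude of the TD error.
            self.priorities[idx] = np.abs(td_errors) + self.eps

In a typical training loop, the critic's TD errors for a sampled batch are fed back through update_priorities, and the importance-sampling weights scale the critic loss to correct the bias introduced by non-uniform sampling.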

Files

Original bundle

Name: Actor_prioritized_experience_replay_(abstract_reprint).pdf
Size: 37.85 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.1 KB
Format: Item-specific license agreed upon submission