Novel sampling strategies for experience replay mechanisms in off-policy deep reinforcement learning algorithms
buir.advisor | Kozat, Süleyman Serdar | |
dc.contributor.author | Mutlu, Furkan Burak | |
dc.date.accessioned | 2024-09-19T08:51:17Z | |
dc.date.available | 2024-09-19T08:51:17Z | |
dc.date.copyright | 2024-09 | |
dc.date.issued | 2024-09 | |
dc.date.submitted | 2024-09-17 | |
dc.description | Cataloged from PDF version of article. | |
dc.description | Thesis (Master's): İhsan Doğramacı Bilkent University, Department of Electrical and Electronics Engineering, 2024. | |
dc.description | Includes bibliographical references (leaves 52-55). | |
dc.description.abstract | Experience replay enables agents to effectively utilize their past experiences repeatedly to improve learning performance. Traditional strategies, such as vanilla experience replay, involve uniformly sampling from the replay buffer, which can lead to inefficiencies as they do not account for the varying importance of different transitions. More advanced methods, like Prioritized Experience Replay (PER), attempt to address this by adjusting the sampling probability of each transition according to its perceived importance. However, constantly recalculating these probabilities for every transition in the buffer after each iteration is computationally expensive and impractical for large-scale applications. Moreover, these methods do not necessarily enhance the performance of actor-critic-based reinforcement learning algorithms, as they typically rely on predefined metrics, such as Temporal Difference (TD) error, which do not directly represent the relevance of a transition to the agent's policy. The importance of a transition can change dynamically throughout training, but existing approaches struggle to adapt to this due to computational constraints. Both vanilla sampling strategies and advanced methods like PER introduce biases toward certain transitions. Vanilla experience replay tends to favor older transitions, which may no longer be useful since they were often generated by a random policy during initialization. Meanwhile, PER is biased toward transitions with high TD errors, which primarily reflect errors in the critic network and may not correspond to improvements in the policy network, as there is no direct correlation between TD error and policy enhancement. Given these challenges, we propose a new sampling strategy designed to mitigate bias and ensure that every transition is used in updates an equal number of times. Our method, Corrected Uniform Experience Replay (CUER), leverages an efficient sum-tree structure to achieve fair sampling counts for all transitions. We evaluate CUER on various continuous control tasks and demonstrate that it outperforms both traditional and advanced replay mechanisms when applied to state-of-the-art off-policy deep reinforcement learning algorithms like TD3 and SAC. Empirical results indicate that CUER consistently improves sample efficiency without imposing a significant computational burden, leading to faster convergence and more stable learning performance. | |
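The abstract states that CUER uses an efficient sum-tree to keep per-transition sampling counts balanced, but the record does not reproduce the data structure itself. The following is a minimal illustrative sketch, in Python, of a generic sum-tree replay buffer: the class name SumTree, the method names, and the weights passed to it are hypothetical stand-ins, not the thesis's actual CUER implementation. It only shows the O(log N) update-and-sample machinery that both PER-style and count-corrected sampling schemes can be built on.

import random

class SumTree:
    # Minimal binary sum-tree: leaves hold per-transition weights, internal
    # nodes hold subtree sums, so updating one weight and drawing a weighted
    # sample both cost O(log capacity). Weights here are illustrative only.

    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)   # index 1 is the root (total weight)
        self.data = [None] * capacity        # stored transitions, leaf-aligned
        self.write = 0                       # next slot to overwrite (ring buffer)

    def add(self, transition, weight):
        leaf = self.write + self.capacity
        self.data[self.write] = transition
        self.update(leaf, weight)
        self.write = (self.write + 1) % self.capacity

    def update(self, leaf, weight):
        # Propagate the change in this leaf's weight up to the root.
        change = weight - self.tree[leaf]
        idx = leaf
        while idx >= 1:
            self.tree[idx] += change
            idx //= 2

    def sample(self):
        # Draw one leaf with probability proportional to its weight.
        # Assumes at least one transition with positive weight has been added.
        s = random.uniform(0.0, self.tree[1])
        idx = 1
        while idx < self.capacity:           # descend until a leaf is reached
            left = 2 * idx
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = left + 1
        return idx, self.data[idx - self.capacity]

# Hypothetical usage: a CUER-like correction could lower the weight of a
# transition after it is sampled, so less-frequently sampled transitions
# become relatively more likely on later draws. The exact correction rule
# used in the thesis is described in the full text.
buffer = SumTree(capacity=4)
for _ in range(4):
    buffer.add(transition=("s", "a", "r", "s_next"), weight=1.0)
leaf, sampled = buffer.sample()
buffer.update(leaf, weight=0.5)  # illustrative down-weighting, not the thesis's rule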
dc.description.provenance | Submitted by İlknur Sarıkaya (ilknur.sarikaya@bilkent.edu.tr) on 2024-09-19T08:51:17Z No. of bitstreams: 1 B162661.pdf: 1555309 bytes, checksum: 1cd252ee16c26bd29969ab35c372cfaa (MD5) | en |
dc.description.provenance | Made available in DSpace on 2024-09-19T08:51:17Z (GMT). No. of bitstreams: 1 B162661.pdf: 1555309 bytes, checksum: 1cd252ee16c26bd29969ab35c372cfaa (MD5) Previous issue date: 2024-09 | en |
dc.description.statementofresponsibility | by Furkan Burak Mutlu | |
dc.format.extent | xii, 55 leaves : illustrations, charts ; 30 cm. | |
dc.identifier.itemid | B162661 | |
dc.identifier.uri | https://hdl.handle.net/11693/115831 | |
dc.language.iso | English | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.subject | Experience replay | |
dc.subject | Reinforcement learning | |
dc.subject | Actor-critic algorithms | |
dc.subject | Off-policy | |
dc.subject | Deep learning | |
dc.subject | Continuous control tasks | |
dc.title | Novel sampling strategies for experience replay mechanisms in off-policy deep reinforcement learning algorithms | |
dc.title.alternative | Derin deterministik politika gradyanı algoritmaları için yeni tecrübe tekrarı stratejileri | |
dc.type | Thesis | |
thesis.degree.discipline | Electrical and Electronics Engineering | |
thesis.degree.grantor | Bilkent University | |
thesis.degree.level | Master's | |
thesis.degree.name | MS (Master of Science) |