Browsing by Subject "Continuous control tasks"

Open Access
    Novel sampling strategies for experience replay mechanisms in off-policy deep reinforcement learning algorithms
    (2024-09) Mutlu, Furkan Burak
Experience replay enables agents to effectively utilize their past experiences repeatedly to improve learning performance. Traditional strategies, such as vanilla experience replay, involve uniformly sampling from the replay buffer, which can lead to inefficiencies as they do not account for the varying importance of different transitions. More advanced methods, like Prioritized Experience Replay (PER), attempt to address this by adjusting the sampling probability of each transition according to its perceived importance. However, constantly recalculating these probabilities for every transition in the buffer after each iteration is computationally expensive and impractical for large-scale applications. Moreover, these methods do not necessarily enhance the performance of actor-critic-based reinforcement learning algorithms, as they typically rely on predefined metrics, such as Temporal Difference (TD) error, which do not directly represent the relevance of a transition to the agent's policy. The importance of a transition can change dynamically throughout training, but existing approaches struggle to adapt to this due to computational constraints. Both vanilla sampling strategies and advanced methods like PER introduce biases toward certain transitions. Vanilla experience replay tends to favor older transitions, which may no longer be useful since they were often generated by a random policy during initialization. Meanwhile, PER is biased toward transitions with high TD errors, which primarily reflect errors in the critic network and may not correspond to improvements in the policy network, as there is no direct correlation between TD error and policy enhancement. Given these challenges, we propose a new sampling strategy designed to mitigate bias and ensure that every transition is used in updates an equal number of times.
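As a point of reference for the PER scheme discussed above, the following sketch (not code from the thesis) shows the standard PER sampling rule: each transition is drawn with probability proportional to |TD error|^alpha, and importance-sampling weights correct the resulting bias. The function name `per_sample` and the default hyperparameters are illustrative assumptions.

```python
import numpy as np

def per_sample(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6):
    """Illustrative PER-style sampling: p_i proportional to |delta_i|^alpha.

    Note the cost the abstract points out: the `priorities` array must be
    kept current, so priorities are recomputed/updated as TD errors change.
    """
    priorities = (np.abs(td_errors) + eps) ** alpha
    probs = priorities / priorities.sum()
    idx = np.random.choice(len(td_errors), size=batch_size, p=probs)
    # Importance-sampling weights undo the non-uniform sampling bias;
    # beta is typically annealed toward 1 over the course of training.
    weights = (len(td_errors) * probs[idx]) ** (-beta)
    weights /= weights.max()  # normalize for stable update magnitudes
    return idx, weights
```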
Our method, Corrected Uniform Experience Replay (CUER), leverages an efficient sum-tree structure to achieve fair sampling counts for all transitions. We evaluate CUER on various continuous control tasks and demonstrate that it outperforms both traditional and advanced replay mechanisms when applied to state-of-the-art off-policy deep reinforcement learning algorithms like TD3 and SAC. Empirical results indicate that CUER consistently improves sample efficiency without imposing a significant computational burden, leading to faster convergence and more stable learning performance.
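The abstract describes CUER only at a high level, so the sketch below is an assumption-laden illustration of the two ingredients it names: a sum-tree for O(log n) weighted sampling, and per-transition weights chosen so that replay counts equalize over time. The class names, the specific re-weighting rule `1 / (1 + count)`, and the buffer interface are all hypothetical, not the thesis's exact algorithm.

```python
import random

class SumTree:
    """Binary sum-tree: leaf i stores a weight; drawing a uniform prefix
    sum locates a leaf in O(log n), avoiding a full rescan per sample."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)  # tree[1] is the root

    def update(self, i, weight):
        node = i + self.capacity
        delta = weight - self.tree[node]
        while node >= 1:            # propagate the change up to the root
            self.tree[node] += delta
            node //= 2

    def total(self):
        return self.tree[1]

    def sample(self):
        s = random.uniform(0.0, self.total())
        node = 1
        while node < self.capacity:  # descend toward the chosen leaf
            left = 2 * node
            if s <= self.tree[left]:
                node = left
            else:
                s -= self.tree[left]
                node = left + 1
        return node - self.capacity


class FairReplayBuffer:
    """Hypothetical fair-sampling buffer: weight each transition by how
    under-sampled it is, so replay counts equalize across the buffer."""
    def __init__(self, capacity):
        self.tree = SumTree(capacity)
        self.counts = [0] * capacity

    def add(self, idx):
        self.counts[idx] = 0
        self.tree.update(idx, 1.0)   # fresh transitions start fully weighted

    def sample(self):
        i = self.tree.sample()
        self.counts[i] += 1
        # Down-weight a transition each time it is drawn, so transitions
        # with lower replay counts are favored until counts catch up.
        self.tree.update(i, 1.0 / (1 + self.counts[i]))
        return i
```

In this sketch each sample and each weight update costs O(log n), which is what makes count-balancing cheap enough for large buffers, in contrast to recomputing priorities for every stored transition.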

Bilkent University Library © 2015-2025 BUIR