Authors: Sağlam, Baturay; Mutlu, Furkan Burak; Çiçek, Doğan Can; Kozat, Süleyman Serdar
Date accessioned: 2025-02-22
Date available: 2025-02-22
Date issued: 2024-03-02
ISSN: 1370-4621
eISSN: 1573-773X
Handle: https://hdl.handle.net/11693/116656
DOI: 10.1007/s11063-024-11461-y
Title: Parameter-Free Reduction of the Estimation Bias in Deep Reinforcement Learning for Deterministic Policy Gradients
Type: Article
Language: English
Rights: CC BY 4.0 Deed (Attribution 4.0 International), https://creativecommons.org/licenses/by/4.0/
Keywords: Deep reinforcement learning; Actor-critic methods; Estimation bias; Deterministic policy gradients; Continuous control
Abstract: Approximation of the value functions in value-based deep reinforcement learning induces overestimation bias, resulting in suboptimal policies. We show that when the reinforcement signals received by the agents have a high variance, deep actor-critic approaches that overcome the overestimation bias lead to a substantial underestimation bias. We first address the detrimental issues in existing approaches that aim to overcome this underestimation error. Then, through extensive statistical analysis, we introduce a novel, parameter-free Deep Q-learning variant to reduce this underestimation bias in deterministic policy gradients. By sampling the weights of a linear combination of two approximate critics from a highly shrunk estimation bias interval, our Q-value update rule is not affected by the variance of the rewards received by the agents throughout learning. We test the performance of the introduced improvement on a set of MuJoCo and Box2D continuous control tasks and demonstrate that it outperforms the existing approaches and improves the baseline actor-critic algorithm in most of the environments tested.
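
The update rule described in the abstract can be illustrated with a minimal sketch: a Q-learning target formed as a convex combination of two approximate critics, where the mixing weight is sampled per update from an interval. This is not the authors' implementation; the Critic class, the mixed_q_target helper, and the BETA_LOW/BETA_HIGH bounds are illustrative assumptions, and the paper's actual shrunk estimation bias interval is derived statistically and is not reproduced here.

```python
# Minimal sketch (assumed, not the paper's code) of a Q-target built from a
# convex combination of two critics with a randomly sampled mixing weight.
import torch
import torch.nn as nn

BETA_LOW, BETA_HIGH = 0.4, 0.6   # hypothetical shrunk interval (assumption)
GAMMA = 0.99                     # discount factor

class Critic(nn.Module):
    """Simple state-action value approximator."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def mixed_q_target(critic1, critic2, reward, next_state, next_action, done):
    """Target = r + gamma * (1 - done) * (beta*Q1 + (1-beta)*Q2),
    with beta drawn uniformly from the (assumed) shrunk interval."""
    with torch.no_grad():
        beta = torch.empty(1).uniform_(BETA_LOW, BETA_HIGH).item()
        q1 = critic1(next_state, next_action)
        q2 = critic2(next_state, next_action)
        q_mix = beta * q1 + (1.0 - beta) * q2
        return reward + GAMMA * (1.0 - done) * q_mix
```

Because the target mixes the two critics rather than taking their minimum (as in clipped double Q-learning), this kind of rule can, in principle, trade off the overestimation of a single critic against the underestimation of the pessimistic minimum; the specific interval used in the paper is what makes the method parameter-free.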