Novel deep reinforcement learning algorithms for continuous control

buir.advisor: Kozat, Süleyman Serdar
dc.contributor.author: Sağlam, Baturay
dc.date.accessioned: 2023-07-06T11:03:55Z
dc.date.available: 2023-07-06T11:03:55Z
dc.date.copyright: 2023-06
dc.date.issued: 2023-06
dc.date.submitted: 2023-06-23
dc.description: Cataloged from PDF version of article.
dc.description: Thesis (Master's): İhsan Doğramacı Bilkent University, Department of Electrical and Electronics Engineering, 2023.
dc.description: Includes bibliographical references (leaves 67-74).
dc.description.abstract: Continuous control deep reinforcement learning (RL) algorithms can learn complex, high-dimensional policies directly from raw sensory inputs. However, they often face challenges related to sample efficiency and exploration, which limit their practicality for real-world applications. In light of this, we introduce two novel techniques that enhance the performance of continuous control deep RL algorithms by refining their experience replay and exploration mechanisms. The first technique is a framework for sampling experiences in actor-critic methods. Designed to stabilize training and prevent the divergence caused by Prioritized Experience Replay (PER), our framework effectively trains both actor and critic networks by striking a balance between the temporal-difference error and the policy gradient. Through both theoretical analysis and empirical investigation, we demonstrate that our framework improves the performance of continuous control deep RL algorithms. The second technique is a directed exploration strategy that relies on intrinsic motivation. Drawing on established theories of animal motivational systems and adapting them to the actor-critic setting, our strategy generates exploratory behaviors that are both informative and diverse. It achieves this by maximizing the error of the value function and unifying the existing intrinsic exploration objectives in the literature. We evaluate the presented methods on various continuous control benchmarks and demonstrate that they outperform state-of-the-art methods while achieving new levels of performance in deep RL.
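To make the first idea concrete, the following is a minimal sketch of a replay buffer whose sampling priority blends the critic's TD error with an actor-side policy-gradient magnitude, in the spirit of the framework described in the abstract. This is an illustration, not the thesis's actual formulation: the blend, the priority exponent `alpha`, the mixing weight `lam`, and the names `td_error` and `pg_magnitude` are all assumptions.

```python
import numpy as np

class BalancedReplayBuffer:
    """Illustrative prioritized buffer: priority mixes a critic-side TD
    error with an actor-side policy-gradient magnitude (assumed form)."""

    def __init__(self, capacity, alpha=0.6, lam=0.5):
        self.capacity = capacity
        self.alpha = alpha    # priority exponent, as in standard PER
        self.lam = lam        # assumed mixing weight between the two signals
        self.data = []
        self.priorities = []
        self.pos = 0          # next slot to overwrite once the buffer is full

    def add(self, transition, td_error, pg_magnitude):
        # Blend the two signals into one priority; the small epsilon keeps
        # every transition sampleable even when both errors are zero.
        p = (self.lam * abs(td_error)
             + (1.0 - self.lam) * abs(pg_magnitude) + 1e-6) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
            self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by
        # non-uniform sampling, as in standard PER.
        weights = (len(self.data) * probs[idx]) ** -1.0
        return [self.data[i] for i in idx], idx, weights / weights.max()
```

In an actor-critic loop, the critic loss would be weighted by the returned importance weights, while `td_error` and `pg_magnitude` would be recomputed for sampled transitions to refresh their priorities.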
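For the second idea, a directed exploration bonus driven by value-function error can be sketched as a simple reward-shaping term. Again, this is a hypothetical illustration under stated assumptions: the one-step TD form of the value error, the scaling coefficient `beta`, and the discount default `gamma=0.99` are not taken from the thesis.

```python
def shaped_reward(extrinsic_r, v_pred, v_next, gamma=0.99, beta=0.1):
    """Hypothetical intrinsic-motivation shaping: the bonus is the absolute
    one-step value-function (TD) error, so the agent is driven toward states
    where its current value estimate is poor. `beta` (assumed) trades off
    intrinsic versus extrinsic reward."""
    value_error = abs(extrinsic_r + gamma * v_next - v_pred)
    return extrinsic_r + beta * value_error
```

An agent would then train on `shaped_reward(...)` in place of the raw environment reward, so that states with large value error are actively sought out during exploration.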
dc.description.provenance: Made available in DSpace on 2023-07-06T11:03:55Z (GMT). No. of bitstreams: 1. B162166.pdf: 47730379 bytes, checksum: 6f6e44fcf070b9da3148a8907badd615 (MD5). Previous issue date: 2023-06.
dc.description.statementofresponsibility: by Baturay Sağlam
dc.format.extent: xv, 85 leaves : illustrations ; 30 cm.
dc.identifier.itemid: B162166
dc.identifier.uri: https://hdl.handle.net/11693/112370
dc.language.iso: English
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Deep reinforcement learning
dc.subject: Continuous control
dc.subject: Off-policy learning
dc.subject: Exploitation-exploration
dc.title: Novel deep reinforcement learning algorithms for continuous control
dc.title.alternative: Sürekli kontrol için yeni derin pekiştirmeli öğrenme algoritmaları [Novel deep reinforcement learning algorithms for continuous control]
dc.type: Thesis
thesis.degree.discipline: Electrical and Electronics Engineering
thesis.degree.grantor: Bilkent University
thesis.degree.level: Master's
thesis.degree.name: MS (Master of Science)

Files

Original bundle

Name: B162166.pdf
Size: 45.52 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission