Contact energy based hindsight experience prioritization
buir.contributor.author | Öğüz, Salih Özgür | |
buir.contributor.orcid | Öğüz, Salih Özgür|0000-0001-8723-1837 | |
dc.citation.epage | 5440 | |
dc.citation.spage | 5434 | |
dc.contributor.author | Sayar, Erdi | |
dc.contributor.author | Bing, Zhenshan | |
dc.contributor.author | D'Eramo, Carlo | |
dc.contributor.author | Öğüz, Salih Özgür | |
dc.contributor.author | Knoll, Alois | |
dc.coverage.spatial | Yokohama, Japan | |
dc.date.accessioned | 2025-02-22T08:50:32Z | |
dc.date.available | 2025-02-22T08:50:32Z | |
dc.date.issued | 2024-08-08 | |
dc.department | Department of Computer Engineering | |
dc.description | Conference Name: IEEE International Conference on Robotics and Automation (ICRA) | |
dc.description | Date of Conference: 13-17 May 2024 | |
dc.description.abstract | Multi-goal robot manipulation tasks with sparse rewards are difficult for reinforcement learning (RL) algorithms due to the inefficiency of collecting successful experiences. Recent algorithms such as Hindsight Experience Replay (HER) expedite learning by taking advantage of failed trajectories, replacing the desired goal with one of the achieved states so that any failed trajectory can contribute to learning. However, HER chooses failed trajectories uniformly, without considering which ones might be the most valuable for learning. In this paper, we address this problem and propose a novel approach, Contact Energy Based Prioritization (CEBP), which selects samples from the replay buffer based on the rich information carried by contact, leveraging the touch sensors in the robot's gripper and the object displacement. Our prioritization scheme favors sampling of contact-rich experiences, which are arguably the ones providing the largest amount of information. We evaluate our proposed approach on various sparse-reward robotic tasks and compare it with state-of-the-art methods. We show that our method surpasses or performs on par with those methods on robot manipulation tasks. Finally, we deploy the policy trained with our method on a real Franka robot for a pick-and-place task and observe that the robot solves the task successfully. The videos and code are publicly available at: https://erdiphd.github.io/HER_force/. | |
dc.description.provenance | Submitted by Aleyna Demirkıran (aleynademirkiran@bilkent.edu.tr) on 2025-02-22T08:50:32Z No. of bitstreams: 1 Contact_Energy_Based_Hindsight_Experience_Prioritization (1).pdf: 1888322 bytes, checksum: 3794d2e58d21881d7828eda8a4c9fd12 (MD5) | en |
dc.description.provenance | Made available in DSpace on 2025-02-22T08:50:32Z (GMT). No. of bitstreams: 1 Contact_Energy_Based_Hindsight_Experience_Prioritization (1).pdf: 1888322 bytes, checksum: 3794d2e58d21881d7828eda8a4c9fd12 (MD5) Previous issue date: 2024-08-08 | en |
dc.identifier.doi | 10.1109/ICRA57147.2024.10610910 | |
dc.identifier.isbn | 979-8-3503-8457-4 | |
dc.identifier.uri | https://hdl.handle.net/11693/116619 | |
dc.language.iso | English | |
dc.publisher | IEEE | |
dc.relation.ispartofseries | Book Series | |
dc.relation.isversionof | https://dx.doi.org/10.1109/ICRA57147.2024.10610910 | |
dc.source.title | 2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024 | |
dc.subject | Training | |
dc.subject | Codes | |
dc.subject | Catalysts | |
dc.subject | Tactile sensors | |
dc.subject | Reinforcement learning | |
dc.subject | Trajectory | |
dc.subject | Friction | |
dc.title | Contact energy based hindsight experience prioritization | |
dc.type | Conference Paper |
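The abstract above outlines the CEBP idea: weight replay trajectories by a contact-energy score computed from the gripper's touch sensors and the object's displacement, so that contact-rich experiences are sampled more often. The following Python sketch is only an illustration of that idea, not the authors' implementation; the trajectory fields (forces, obj_pos), the energy definition (per-step force magnitude times object displacement, summed over the trajectory), and the softmax temperature are assumptions made for this example.

import numpy as np

def contact_energy(contact_forces, object_positions):
    # Illustrative "contact energy" of one trajectory (assumed definition):
    # per-step touch-sensor force magnitude times object displacement, summed.
    # contact_forces:   (T, n_sensors) gripper touch-sensor readings
    # object_positions: (T + 1, 3) object positions at each step
    displacement = np.linalg.norm(np.diff(object_positions, axis=0), axis=1)  # (T,)
    force_mag = np.abs(contact_forces).sum(axis=1)                            # (T,)
    return float((force_mag * displacement).sum())

def prioritized_indices(buffer, batch_size, temperature=1.0, rng=None):
    # Sample trajectory indices with probability increasing in contact energy
    # (softmax over energies, so contact-rich trajectories are replayed more often).
    rng = rng if rng is not None else np.random.default_rng()
    energies = np.array([contact_energy(tr["forces"], tr["obj_pos"]) for tr in buffer])
    logits = energies / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(buffer), size=batch_size, p=probs)

# Toy usage: two 5-step trajectories, one with contact and motion, one without.
rng = np.random.default_rng(0)
buffer = [
    {"forces": rng.random((5, 2)), "obj_pos": rng.random((6, 3))},
    {"forces": np.zeros((5, 2)),   "obj_pos": np.tile(rng.random(3), (6, 1))},
]
print(prioritized_indices(buffer, batch_size=4, rng=rng))

In a full HER pipeline these sampling weights would sit on top of the usual hindsight goal relabeling; here they only determine which stored trajectories are drawn for replay.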
Files
Original bundle
- Name: Contact_Energy_Based_Hindsight_Experience_Prioritization (1).pdf
- Size: 1.8 MB
- Format: Adobe Portable Document Format

License bundle
- Name: license.txt
- Size: 1.71 KB
- Format: Item-specific license agreed upon to submission