Functional contour-following via haptic perception and reinforcement learning
buir.contributor.author | Tekin, Cem | |
dc.citation.epage | 72 | en_US |
dc.citation.issueNumber | 1 | en_US |
dc.citation.spage | 61 | en_US |
dc.citation.volumeNumber | 11 | en_US |
dc.contributor.author | Hellman, R. B. | en_US |
dc.contributor.author | Tekin, Cem | en_US |
dc.contributor.author | van der Schaar, M. | en_US |
dc.contributor.author | Santos, V. J. | en_US |
dc.date.accessioned | 2019-02-21T16:05:53Z | en_US |
dc.date.available | 2019-02-21T16:05:53Z | en_US |
dc.date.issued | 2018 | en_US |
dc.department | Department of Electrical and Electronics Engineering | en_US |
dc.description.abstract | Many tasks involve the fine manipulation of objects despite limited visual feedback. In such scenarios, tactile and proprioceptive feedback can be leveraged for task completion. We present an approach for real-time haptic perception and decision-making for a haptics-driven, functional contour-following task: the closure of a ziplock bag. This task is challenging for robots because the bag is deformable, transparent, and visually occluded by artificial fingertip sensors that are also compliant. A deep neural network classifier was trained to estimate the state of a zipper within a robot's pinch grasp. A Contextual Multi-Armed Bandit (C-MAB) reinforcement learning algorithm was implemented to maximize cumulative rewards by balancing exploration versus exploitation of the state-action space. The C-MAB learner outperformed a benchmark Q-learner by more efficiently exploring the state-action space while learning a hard-to-code task. The learned C-MAB policy was tested with novel ziplock bag scenarios and contours (wire, rope). Importantly, this work contributes to the development of reinforcement learning approaches that account for limited resources such as hardware life and researcher time. As robots are used to perform complex, physically interactive tasks in unstructured or unmodeled environments, it becomes important to develop methods that enable efficient and effective learning with physical testbeds. | en_US |
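The abstract refers to a Contextual Multi-Armed Bandit (C-MAB) learner that balances exploration and exploitation over a state-action space. As a rough illustration only, and not the authors' specific C-MAB algorithm, the sketch below shows a generic epsilon-greedy contextual bandit in Python; the context labels, action names, and reward rule are hypothetical stand-ins for the tactile state estimates and contour-following actions described in the abstract.

```python
import random
from collections import defaultdict

# Minimal, hypothetical epsilon-greedy contextual bandit sketch.
# Context = a discretized zipper-state estimate from a tactile classifier;
# actions = candidate fingertip motions along the bag contour.

class ContextualBandit:
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        # Running reward statistics per (context, action) pair.
        self.counts = defaultdict(int)
        self.values = defaultdict(float)

    def select_action(self, context):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.values[(context, a)])

    def update(self, context, action, reward):
        # Incremental mean update of the reward estimate for (context, action).
        key = (context, action)
        self.counts[key] += 1
        self.values[key] += (reward - self.values[key]) / self.counts[key]

if __name__ == "__main__":
    # Toy usage with simulated contexts and rewards (illustrative only).
    bandit = ContextualBandit(
        actions=["slide_forward", "regrasp", "lift"], epsilon=0.2
    )
    for step in range(1000):
        context = random.choice(["zipper_centered", "zipper_slipping"])
        action = bandit.select_action(context)
        reward = 1.0 if (context == "zipper_centered" and action == "slide_forward") else 0.0
        bandit.update(context, action, reward)
    print(dict(bandit.values))
```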
dc.description.sponsorship | The authors wish to thank Peter Aspinall for assistance with the construction of the robot testbed. This work was supported in part by National Science Foundation Awards #1461547, #1463960, and #1533983, and the Office of Naval Research Award #N00014-16-1-2468. | en_US |
dc.identifier.doi | 10.1109/TOH.2017.2753233 | en_US |
dc.identifier.eissn | 2329-4051 | |
dc.identifier.issn | 1939-1412 | en_US |
dc.identifier.uri | http://hdl.handle.net/11693/50279 | en_US |
dc.language.iso | English | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.relation.isversionof | https://doi.org/10.1109/TOH.2017.2753233 | en_US |
dc.relation.project | 1461547 - 1463960 - 1533983 - Office of Naval Research, ONR: N00014-16-1-2468 | en_US |
dc.source.title | IEEE Transactions on Haptics | en_US |
dc.subject | Active touch | en_US |
dc.subject | Contour-following | en_US |
dc.subject | Decision making | en_US |
dc.subject | Haptic perception | en_US |
dc.subject | Manipulation | en_US |
dc.subject | Reinforcement learning | en_US |
dc.title | Functional contour-following via haptic perception and reinforcement learning | en_US |
dc.type | Article | en_US |
Files
Original bundle
- Name: Functional_Contour_following_via_Haptic_Perception_and_Reinforcement_Learning.pdf
- Size: 888.84 KB
- Format: Adobe Portable Document Format
- Description: Full printable version