Perception, learning and use of tool affordances on humanoid robots
Please cite this item using this persistent URL: http://hdl.handle.net/11693/15862
Şahin, Pınar Duygulu
Humans and some animals use different tools for different aims, such as extending reach, amplifying mechanical force, creating or augmenting the signal value of social displays, camouflage, bodily comfort, and effective control of fluids. In robotics, tools are mostly used to extend a robot's reach. For this aim, the question "What kind of tool is better in which situation?" is very significant, and it is what makes the affordance concept important: different tools afford a variety of capabilities depending on the target objects. To learn tool affordances, robots should experience effects by applying behaviors to different objects. In this study, our goal is to teach the humanoid robot iCub the affordances of tools by applying different behaviors to a variety of objects and observing the effects of these interactions. Using the robot's eye camera and a Kinect, tool and object features are obtained for each interaction to construct the training data. The success of a behavior depends on the tool features, the object's position and properties, and also the hand with which the robot uses the tool. As a result of training each behavior, the robot successfully predicts the effects of different behaviors and infers the affordances when a tool is given and an object is shown. When an affordance is requested, the robot can apply the appropriate behavior given a tool and an object, and it can select the best tool among several when a specific affordance is requested and an object is shown. This study also demonstrates how different positions and properties of objects affect affordance and behavior results, and how these results change when a part of a tool is removed, modified, or augmented with a new part.
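The pipeline the abstract describes (collect tool/object features per interaction, learn per-behavior effect predictors, then select the best tool for a requested affordance) can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual method: the feature names (hook length, handle length, object distance, object size), the 1-nearest-neighbor predictor, and the training samples are all hypothetical placeholders.

```python
# Sketch of per-behavior affordance learning. Assumption: tools and
# objects are reduced to small numeric feature vectors; effects are
# discrete labels observed after executing the behavior.
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class BehaviorModel:
    """Predicts the effect of one behavior (e.g. 'pull') with 1-NN
    over concatenated (tool features + object features)."""
    def __init__(self):
        self.samples = []  # list of (feature_vector, observed_effect)

    def record(self, tool_feats, obj_feats, effect):
        self.samples.append((tool_feats + obj_feats, effect))

    def predict(self, tool_feats, obj_feats):
        q = tool_feats + obj_feats
        return min(self.samples, key=lambda s: dist(s[0], q))[1]

def best_tool(model, tools, obj_feats, desired_effect):
    """Tool selection: return a tool whose predicted effect for this
    object matches the requested affordance, or None."""
    for name, feats in tools.items():
        if model.predict(feats, obj_feats) == desired_effect:
            return name
    return None

# Hypothetical data: tool = (hook_length, handle_length),
# object = (distance, size). A hookless stick fails to pull.
pull = BehaviorModel()
pull.record((0.3, 0.5), (0.6, 0.1), "object_moved_closer")
pull.record((0.0, 0.5), (0.6, 0.1), "no_change")

tools = {"rake": (0.3, 0.5), "stick": (0.0, 0.5)}
print(best_tool(pull, tools, (0.6, 0.1), "object_moved_closer"))  # rake
```

In the study itself, one such model would exist per behavior, and the same predictions support both directions of the query: given a tool and object, infer the afforded effects; given a desired effect and object, pick the tool.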