Authors: Ertuğrul, I. Ö.; Jeni, L. A.; Dibeklioğlu, H.
Date available: 2019-02-21
Date issued: 2018
ISSN: 0262-8856
Handle: http://hdl.handle.net/11693/49889
Title: Modeling and synthesis of kinship patterns of facial expressions
Type: Article
DOI: 10.1016/j.imavis.2018.09.012
Language: English
Keywords: Facial action units; Facial dynamics; Kinship synthesis; Kinship verification; Temporal analysis
Abstract: Analysis of kinship from facial images or videos is an important problem. Prior machine learning and computer vision studies have approached kinship analysis as a verification or recognition task. In this paper, for the first time in the literature, we propose a kinship synthesis framework that generates smile and disgust videos of (probable) children from the corresponding expression videos of their parents. While the appearance of a child's expression is learned using a convolutional encoder-decoder network, another neural network models the dynamics of the corresponding expression. The expression video of the estimated child is synthesized by the combined use of the appearance and dynamics models. To validate our results, we perform kinship verification experiments using videos of real parents and of the estimated children generated by our framework. The results show that the generated videos of children achieve higher correct verification rates than videos of real children. Our results also indicate that using the generated videos together with the real ones when training kinship verification models increases accuracy, suggesting that such videos can serve as a synthetic dataset. Furthermore, we evaluate the expression similarity between input and output frames and show that the proposed method can retain the expression of the input faces fairly well while transforming the facial identity.