Multimodal video-based personality recognition using Long Short-Term Memory and convolutional neural networks
Embargo Lift Date: 2020-01-16
Personality computing and affective computing, in which the recognition of personality traits is essential, have recently gained increasing interest and attention across many research areas. Personality traits are described by the Five-Factor Model along five dimensions: openness, conscientiousness, extraversion, agreeableness, and neuroticism. We propose a novel approach to recognize these five personality traits of people from videos. Personality and emotion affect speaking style, facial expressions, body movements, and linguistic factors in social contexts, and they are in turn affected by environmental elements. For this reason, we develop a multimodal system that recognizes apparent personality traits from several modalities: face, environment, audio, and transcription features. In our method, modality-specific neural networks learn to recognize the traits independently, and a final prediction of apparent personality is obtained through feature-level fusion of these networks. We employ pre-trained deep convolutional neural networks such as ResNet and VGGish to extract high-level features, and Long Short-Term Memory networks to integrate temporal information. The full model, which consists of modality-specific subnetworks, is trained in two stages: we first train the subnetworks separately and then fine-tune the overall model initialized from these trained subnetworks. We evaluate the proposed method on the ChaLearn First Impressions V2 challenge dataset. Our approach obtains the best overall "mean accuracy" score, averaged over the five personality traits, compared to the state-of-the-art.
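The architecture described above (per-modality CNN features fed to an LSTM, feature-level fusion, two-stage training) can be sketched roughly as follows in PyTorch. This is a minimal illustration, not the thesis's exact implementation: the feature dimensions, hidden size, and the use of precomputed frame embeddings (in place of running ResNet/VGGish end-to-end) are all assumptions.

```python
import torch
import torch.nn as nn

class ModalityLSTM(nn.Module):
    """Per-modality subnetwork: an LSTM over precomputed CNN features
    (e.g. ResNet frame embeddings or VGGish audio embeddings).
    In stage 1, each subnetwork is trained separately with its own head."""
    def __init__(self, feat_dim, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 5)  # stage-1 per-modality trait scores

    def forward(self, x):                 # x: (batch, time, feat_dim)
        _, (h, _) = self.lstm(x)
        return h[-1]                      # last hidden state summarizes the clip

class FusionModel(nn.Module):
    """Feature-level fusion: concatenate the subnetworks' hidden states
    and regress the five traits jointly (stage-2 fine-tuning)."""
    def __init__(self, feat_dims, hidden=128):
        super().__init__()
        self.subnets = nn.ModuleList(ModalityLSTM(d, hidden) for d in feat_dims)
        self.fusion = nn.Linear(hidden * len(feat_dims), 5)

    def forward(self, inputs):            # one feature tensor per modality
        feats = [net(x) for net, x in zip(self.subnets, inputs)]
        return torch.sigmoid(self.fusion(torch.cat(feats, dim=1)))

# Assumed feature sizes: 2048-d ResNet embeddings for face and environment
# frames, 128-d VGGish audio embeddings, 300-d word embeddings for the
# transcript -- illustrative choices, not the paper's exact configuration.
model = FusionModel([2048, 2048, 128, 300])
batch = [torch.randn(2, 10, d) for d in (2048, 2048, 128, 300)]
out = model(batch)                        # (batch, 5) trait scores in [0, 1]
```

In the two-stage scheme, each `ModalityLSTM` would first be optimized on its own `head` output; the `FusionModel` then reuses those trained weights and fine-tunes everything through the shared fusion layer.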
Convolutional Neural Network (CNN)
Recurrent Neural Network (RNN)
Long Short-Term Memory (LSTM) network
Personality trait recognition