Multi-label sentiment analysis on 100 languages with dynamic weighting for label imbalance

Authors: Yılmaz, Selim Fırat; Kaynak, Ergün Batuhan; Koç, Aykut; Dibeklioğlu, Hamdi; Kozat, Süleyman Serdar
Date issued: 2021-07-19
Date accessioned: 2022-03-04
Date available: 2022-03-04
Type: Article
Language: English
ISSN: 2162-237X (print); 2162-2388 (electronic)
DOI: 10.1109/TNNLS.2021.3094304
URI: http://hdl.handle.net/11693/77683
Keywords: Cross-lingual; Label imbalance; Macro-F1 maximization; Multi-label; Natural language processing (NLP); Sentiment analysis; Social media

Abstract: We investigate cross-lingual sentiment analysis, which has attracted significant attention due to its applications in areas including market research, politics, and the social sciences. In particular, we introduce a sentiment analysis framework in a multi-label setting, as this conforms to Plutchik's wheel of emotions. We introduce a novel dynamic weighting method that balances the contribution of each class during training, unlike previous static weighting methods that assign fixed weights based on class frequency. Moreover, we adapt the focal loss, which favors harder instances, from the single-label object recognition literature to our multi-label setting. Furthermore, we derive a method for choosing optimal class-specific thresholds that maximize the macro-F1 score in linear time. Through an extensive set of experiments, we show that our method achieves state-of-the-art performance in seven of nine metrics across three languages using a single model, compared with common baselines and the best-performing methods in the SemEval competition. We publicly share the code for our model, which can perform sentiment analysis in 100 languages, to facilitate further research.
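
The abstract mentions choosing optimal class-specific thresholds that maximize the macro-F1 score in linear time. Since macro-F1 averages per-class F1 scores, each class's threshold can be tuned independently; for a single class, every observed score is a candidate threshold, and F1 can be evaluated for all candidates in one linear pass after sorting. The sketch below illustrates this idea only — the helper name `best_threshold_f1` is hypothetical and this is not the paper's actual implementation:

```python
import numpy as np

def best_threshold_f1(scores, labels):
    """For one class, return the (threshold, F1) pair maximizing F1.

    Candidate thresholds are the observed scores themselves; after an
    O(n log n) sort, TP/FP counts are updated incrementally with cumulative
    sums, so all candidates are evaluated in a single linear pass.
    This is an illustrative sketch, not the paper's implementation.
    """
    order = np.argsort(-scores)           # indices sorted by descending score
    y = labels[order].astype(float)
    total_pos = max(y.sum(), 1.0)         # avoid division by zero
    tp = np.cumsum(y)                     # TPs if the top-k items are predicted positive
    k = np.arange(1, len(y) + 1)          # number of items predicted positive
    precision = tp / k
    recall = tp / total_pos
    denom = precision + recall
    f1 = np.where(denom > 0, 2 * precision * recall / np.maximum(denom, 1e-12), 0.0)
    best = int(np.argmax(f1))
    return float(scores[order][best]), float(f1[best])
```

Repeating this search for each class and averaging the resulting per-class F1 values gives the macro-F1 that the chosen thresholds jointly maximize.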