
      The effect of gender bias on hate speech detection

Author(s): Şahinuç, F.; Yılmaz, E. H.; Toraman, Ç.; Koç, Aykut
Date: 2022-10-08
Source Title: Signal, Image and Video Processing
Print ISSN: 1863-1703
Publisher: Springer
Pages: 1 - 7
Language: English
Type: Article
      Abstract
Hate speech against individuals or communities with different backgrounds is a major problem in online social networks. The domain of hate speech has spread to various topics, including race, religion, and gender. Although there are many efforts toward hate speech detection across domains and languages, the effect of gender identity has not been examined in isolation. Moreover, hate speech detection is mostly studied for particular languages, specifically English, but not for low-resource languages such as Turkish. We examine gender identity-based hate speech detection for both English and Turkish tweets. We compare the performance of state-of-the-art models using 20,000 tweets per language. We observe that transformer-based language models outperform bag-of-words and deep learning models, while the conventional bag-of-words model performs surprisingly well, possibly due to offensive or hate-related keywords. Furthermore, we analyze the effect of debiased embeddings on hate speech detection. We find that performance can be improved by removing gender-related bias from neural embeddings, since gender-biased words can have offensive or hateful implications.
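The abstract's debiasing step can be illustrated with a minimal sketch. The paper does not specify its exact procedure here; the snippet below assumes a standard hard-debiasing approach: estimate a gender direction from definitional word pairs (e.g. "he"/"she"), then neutralize each embedding by removing its component along that direction. The toy 4-dimensional vectors are hypothetical and exist only for illustration.

```python
import numpy as np

# Hypothetical toy embeddings (illustration only, not real pretrained vectors).
vectors = {
    "he":       np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.2, 0.1, 0.0]),
    "man":      np.array([ 0.9, 0.5, 0.0, 0.1]),
    "woman":    np.array([-0.9, 0.5, 0.0, 0.1]),
    "engineer": np.array([ 0.4, 0.3, 0.8, 0.2]),
}

def gender_direction(vectors, pairs):
    """Estimate the bias direction as the top singular vector of the
    differences between definitional gender pairs."""
    diffs = np.array([vectors[a] - vectors[b] for a, b in pairs])
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]

def neutralize(v, g):
    """Remove the component of v that lies along the bias direction g."""
    return v - np.dot(v, g) * g

pairs = [("he", "she"), ("man", "woman")]
g = gender_direction(vectors, pairs)
debiased = {w: neutralize(v, g) for w, v in vectors.items()}

# After neutralizing, every vector is orthogonal to the bias direction.
print(abs(float(np.dot(debiased["engineer"], g))) < 1e-9)  # → True
```

The debiased vectors would then replace the original embeddings as input features to the downstream hate speech classifier; only the bias component is removed, so the remaining semantic content of each vector is preserved.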
Keywords: Debiased embedding; Deep learning; Gender identity; Hate speech; Language model
Permalink: http://hdl.handle.net/11693/111608
Published Version (Please cite this version): https://doi.org/10.1007/s11760-022-02368-z
      Collections
• Department of Electrical and Electronics Engineering
• National Magnetic Resonance Research Center (UMRAM)