Browsing by Subject "Gender bias"

Now showing 1 - 3 of 3
  • Item (Open Access)
    Analysis of gender bias in legal texts using natural language processing methods
    (2023-07) Sevim, Nurullah
    Word embeddings have become important building blocks that are used widely in natural language processing (NLP). Despite their many advantages, word embeddings can unintentionally encode gender- and ethnicity-based biases present in the corpora they are trained on. Ethical concerns have therefore been raised, since word embeddings feed into many higher-level algorithms. Furthermore, transformer-based contextualized language models constitute the state-of-the-art in several NLP tasks and applications. Despite their utility, contextualized models can contain human-like social biases, as their training corpora generally consist of human-generated text. Evaluating and removing social biases in NLP models has been an ongoing and prominent research endeavor. In parallel, NLP approaches in the legal domain, namely legal NLP or computational law, have also been increasing recently. Eliminating unwanted bias in the legal domain is doubly crucial, since the law has the utmost importance and effect on people. We approach the gender bias problem from the scope of the legal text processing domain. In the first stage of our study, we focus on gender bias in traditional word embeddings, like Word2Vec and GloVe. Word embedding models trained on corpora composed of legal documents and legislation from different countries are utilized to measure and eliminate gender bias in legal documents. Several methods are employed to reveal the degree of gender bias and to observe its variation across countries. Moreover, a debiasing method is used to neutralize unwanted bias. The preservation of the semantic coherence of the debiased vector space is also demonstrated using high-level tasks. In the second stage, we study the gender bias encoded in BERT-based models. We propose a new template-based bias measurement method with a bias evaluation corpus using crime words from the FBI database. This method quantifies the gender bias present in BERT-based models for legal applications. Furthermore, we propose a fine-tuning-based debiasing method using the European Court of Human Rights (ECtHR) corpus to debias legal pre-trained models. We test the debiased models on the LexGLUE benchmark to confirm that the underlying semantic vector space is not perturbed during the debiasing process. Finally, overall results and their implications are discussed in the scope of NLP in the legal domain.
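The first-stage approach described above — measuring a word's gender bias as its projection onto a gender direction, then neutralizing that component — can be sketched as follows. This is a minimal illustration in the style of hard-debiasing: the toy 4-dimensional vectors and the he/she seed pair are assumptions for demonstration, not the trained legal-corpus embeddings used in the thesis.

```python
import numpy as np

# Toy 4-dimensional embeddings (illustrative only; real Word2Vec/GloVe
# models trained on legal corpora use hundreds of dimensions).
emb = {
    "he":    np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":   np.array([-1.0, 0.2, 0.1, 0.0]),
    "judge": np.array([ 0.4, 0.8, 0.3, 0.1]),
    "nurse": np.array([-0.5, 0.7, 0.2, 0.1]),
}

def gender_direction(emb):
    """Approximate the gender subspace by the he-she difference vector."""
    d = emb["he"] - emb["she"]
    return d / np.linalg.norm(d)

def bias(word, emb, g):
    """Signed projection of a (normalized) word vector onto the gender direction."""
    v = emb[word]
    return float(np.dot(v / np.linalg.norm(v), g))

def neutralize(word, emb, g):
    """Hard-debias-style neutralization: remove the gender component."""
    v = emb[word]
    return v - np.dot(v, g) * g

g = gender_direction(emb)
print(f"bias(judge) = {bias('judge', emb, g):+.3f}")
debiased = neutralize("judge", emb, g)
print(f"projection after neutralization: {np.dot(debiased, g):+.6f}")  # ~0
```

After neutralization the word vector carries no component along the gender direction, while its remaining dimensions — which carry the semantic content checked by downstream tasks — are untouched.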
  • Item (Open Access)
    Measuring and mitigating gender bias in legal contextualized language models
    (Association for Computing Machinery, 2024-02-13) Bozdağ, Mustafa; Sevim, Nurullah; Koç, Aykut
    Transformer-based contextualized language models constitute the state-of-the-art in several natural language processing (NLP) tasks and applications. Despite their utility, contextualized models can contain human-like social biases, as their training corpora generally consist of human-generated text. Evaluating and removing social biases in NLP models has been a major research endeavor. In parallel, NLP approaches in the legal domain, namely, legal NLP or computational law, have also been increasing. Eliminating unwanted bias in legal NLP is crucial, since the law has the utmost importance and effect on people. In this work, we focus on the gender bias encoded in BERT-based models. We propose a new template-based bias measurement method with a new bias evaluation corpus using crime words from the FBI database. This method quantifies the gender bias present in BERT-based models for legal applications. Furthermore, we propose a new fine-tuning-based debiasing method using the European Court of Human Rights (ECtHR) corpus to debias legal pre-trained models. We test the debiased models’ language understanding performance on the LexGLUE benchmark to confirm that the underlying semantic vector space is not perturbed during the debiasing process. Finally, we propose a bias penalty for the performance scores to emphasize the effect of gender bias on model performance.
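The template-based measurement described above can be sketched as follows. The template string, the crime words, and the probability table are toy stand-ins: a real implementation would query a BERT fill-mask head for the pronoun probabilities and draw the crime words from the FBI database, as the paper does.

```python
import math

# Illustrative template-based bias score, assuming a masked language model
# that returns P(pronoun | template). The values below are invented.
CRIMES = ["theft", "assault"]
TEMPLATE = "[MASK] was convicted of {crime}."

# Toy stand-in for a BERT fill-mask call: P(pronoun fills [MASK]).
TOY_PROBS = {
    ("theft", "he"): 0.62, ("theft", "she"): 0.21,
    ("assault", "he"): 0.70, ("assault", "she"): 0.12,
}

def fill_prob(crime, pronoun):
    return TOY_PROBS[(crime, pronoun)]

def bias_score(crime):
    """Log-ratio of male vs. female fill probability; 0 means no bias."""
    return math.log(fill_prob(crime, "he") / fill_prob(crime, "she"))

scores = {c: bias_score(c) for c in CRIMES}
avg = sum(scores.values()) / len(scores)
print(f"average bias score: {avg:+.3f}")  # positive -> male-skewed
```

Averaging the per-template log-ratios over the full crime-word corpus yields a single scalar that can be compared before and after fine-tuning-based debiasing.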
  • Item (Open Access)
    Türkçe kelime temsillerinde cinsiyetçi ön yargının incelenmesi (Investigation of gender bias in Turkish word representations)
    (IEEE, 2021-07-19) Sevim, Nurullah; Koç, Aykut
    The investigation of gender bias in Natural Language Processing applications has recently gained importance due to the negative consequences of potentially sexist behavior. Such biases have been studied extensively in various contexts, especially for English word embeddings. In this study, Turkish word embeddings are examined in terms of gender bias, and the structure of the Turkish language is compared with that of English with respect to gender bias. Measurements of gender bias in the word embeddings indicate that Turkish carries less gender bias in its linguistic structure than English.
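A cross-lingual comparison like the one above can be sketched by computing an average bias magnitude per embedding space. All vectors and the adam/kadın (man/woman) seed pair for Turkish are illustrative assumptions — Turkish has the gender-neutral pronoun "o", so a noun pair must stand in for he/she — and are not the trained embeddings used in the paper.

```python
import numpy as np

def avg_abs_bias(emb, male, female, words):
    """Mean absolute projection of the given words onto the gender direction."""
    g = emb[male] - emb[female]
    g = g / np.linalg.norm(g)
    return float(np.mean(
        [abs(np.dot(emb[w] / np.linalg.norm(emb[w]), g)) for w in words]
    ))

# Toy English space: occupation words lean toward one gender pole.
english = {
    "he":      np.array([ 1.0, 0.0, 0.2]),
    "she":     np.array([-1.0, 0.0, 0.2]),
    "doctor":  np.array([ 0.6, 0.7, 0.1]),
    "teacher": np.array([-0.4, 0.8, 0.1]),
}
# Toy Turkish space: the same occupations sit closer to neutral.
turkish = {
    "adam":     np.array([ 1.0, 0.0, 0.2]),
    "kadın":    np.array([-1.0, 0.0, 0.2]),
    "doktor":   np.array([ 0.1, 0.9, 0.1]),
    "öğretmen": np.array([-0.1, 0.9, 0.1]),
}

en = avg_abs_bias(english, "he", "she", ["doctor", "teacher"])
tr = avg_abs_bias(turkish, "adam", "kadın", ["doktor", "öğretmen"])
print(f"English avg |bias| = {en:.3f}, Turkish avg |bias| = {tr:.3f}")
```

A lower average magnitude for the Turkish space corresponds to the paper's finding that Turkish encodes less gender bias than English.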

Bilkent University Library © 2015-2025 BUIR

  • Privacy policy
  • Send Feedback