Measuring and mitigating gender bias in legal contextualized language models

buir.contributor.author: Bozdağ, Mustafa
buir.contributor.author: Sevim, Nurullah
buir.contributor.author: Koç, Aykut
buir.contributor.orcid: Sevim, Nurullah | 0009-0000-0790-0587
buir.contributor.orcid: Koç, Aykut | 0000-0002-6348-2663
buir.contributor.orcid: Bozdağ, Mustafa | 0009-0007-9090-8555
dc.citation.epage: 26
dc.citation.issueNumber: 4
dc.citation.spage: 1
dc.citation.volumeNumber: 18
dc.contributor.author: Bozdağ, Mustafa
dc.contributor.author: Sevim, Nurullah
dc.contributor.author: Koç, Aykut
dc.date.accessioned: 2025-02-23T17:45:56Z
dc.date.available: 2025-02-23T17:45:56Z
dc.date.issued: 2024-02-13
dc.department: Department of Electrical and Electronics Engineering
dc.description: LegalBERT
dc.description.abstract: Transformer-based contextualized language models constitute the state of the art in several natural language processing (NLP) tasks and applications. Despite their utility, contextualized models can encode human-like social biases, because their training corpora generally consist of human-generated text. Evaluating and removing social biases in NLP models has therefore become a major research endeavor. In parallel, NLP approaches in the legal domain, namely legal NLP or computational law, have been growing rapidly. Eliminating unwanted bias in legal NLP is crucial, since the law profoundly affects people's lives. In this work, we focus on the gender bias encoded in BERT-based models. We propose a new template-based bias measurement method, together with a new bias evaluation corpus built from crime words in the FBI database, to quantify the gender bias present in BERT-based models for legal applications. Furthermore, we propose a new fine-tuning-based debiasing method that uses the European Court of Human Rights (ECtHR) corpus to debias legal pre-trained models. We test the debiased models' language understanding performance on the LexGLUE benchmark to confirm that the underlying semantic vector space is not perturbed during debiasing. Finally, we propose a bias penalty on the performance scores to emphasize the effect of gender bias on model performance.
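
To make the template-based measurement idea concrete, here is a minimal sketch of probing a BERT-style masked language model for gendered associations with crime terms. This is an illustration in the spirit of the abstract, not the paper's exact procedure: the model name, template wording, and crime-word list below are placeholder assumptions, not the FBI-derived evaluation corpus used in the article.

```python
# Sketch: compare the masked-LM probability of "he" vs. "she" in
# crime-word templates. Placeholder model/template/word list, not the
# authors' exact setup.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "nlpaueb/legal-bert-base-uncased"  # assumed; any BERT variant works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

crime_words = ["theft", "assault", "fraud"]  # illustrative, not the FBI list

def pronoun_log_prob(template: str, pronoun: str) -> float:
    """Log-probability of `pronoun` at the [MASK] position of `template`."""
    inputs = tokenizer(template, return_tensors="pt")
    # Locate the first [MASK] token in the encoded sequence.
    mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    log_probs = torch.log_softmax(logits[0, mask_positions[0]], dim=-1)
    return log_probs[tokenizer.convert_tokens_to_ids(pronoun)].item()

for crime in crime_words:
    template = f"{tokenizer.mask_token} was accused of {crime}."
    gap = pronoun_log_prob(template, "he") - pronoun_log_prob(template, "she")
    print(f"{crime}: log P(he) - log P(she) = {gap:+.3f}")
```

A positive gap means the model prefers the male pronoun in that context; averaging such gaps over a large set of crime words yields an aggregate gender-bias score, broadly analogous to (though not identical with) the measurement the paper proposes.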
dc.identifier.doi: 10.1145/3628602
dc.identifier.eissn: 1556-472X
dc.identifier.issn: 1556-4681
dc.identifier.uri: https://hdl.handle.net/11693/116698
dc.language.iso: English
dc.publisher: Association for Computing Machinery
dc.relation.isversionof: https://doi.org/10.1145/3628602
dc.rights: CC BY 4.0 (Attribution 4.0 International Deed)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source.title: ACM Transactions on Knowledge Discovery from Data
dc.subject: Legal NLP
dc.subject: Gender bias
dc.subject: Contextualized models
dc.subject: BERT
dc.subject: LegalBERT
dc.title: Measuring and mitigating gender bias in legal contextualized language models
dc.type: Article

Files

Original bundle

Name: Measuring_and_Mitigating_Gender_Bias_in_Legal_Contextualized_Language_Models.pdf
Size: 1.68 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed upon at submission