Imparting interpretability to word embeddings while preserving semantic structure

buir.contributor.author: Utlu, İhsan
buir.contributor.author: Şahinuç, Furkan
buir.contributor.author: Özaktaş, Haldun M.
buir.contributor.author: Koç, Aykut
dc.citation.epage: 26
dc.citation.spage: 1
dc.contributor.author: Şenel, L. K.
dc.contributor.author: Utlu, İhsan
dc.contributor.author: Şahinuç, Furkan
dc.contributor.author: Özaktaş, Haldun M.
dc.contributor.author: Koç, Aykut
dc.date.accessioned: 2021-03-08T08:11:14Z
dc.date.available: 2021-03-08T08:11:14Z
dc.date.issued: 2020
dc.department: Department of Electrical and Electronics Engineering
dc.department: National Magnetic Resonance Research Center (UMRAM)
dc.description.abstract: As a ubiquitous method in natural language processing, word embeddings are extensively employed to map semantic properties of words into a dense vector representation. They capture semantic and syntactic relations among words, but the vectors corresponding to the words are only meaningful relative to each other. Neither the vector nor its dimensions have any absolute, interpretable meaning. We introduce an additive modification to the objective function of the embedding learning algorithm that encourages the embedding vectors of words that are semantically related to a predefined concept to take larger values along a specified dimension, while leaving the original semantic learning mechanism mostly unaffected. In other words, we align words that are already determined to be related along predefined concepts. Therefore, we impart interpretability to the word embedding by assigning meaning to its vector dimensions. The predefined concepts are derived from an external lexical resource, which in this paper is chosen as Roget's Thesaurus. We observe that alignment along the chosen concepts is not limited to words in the thesaurus and extends to other related words as well. We quantify the extent of interpretability and assignment of meaning from our experimental results. Manual human evaluation results are also presented to further verify that the proposed method increases interpretability. We also demonstrate the preservation of semantic coherence of the resulting vector space using word-analogy/word-similarity tests and a downstream task. These tests show that the interpretability-imparted word embeddings obtained by the proposed framework do not sacrifice performance on common benchmark tests.
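The mechanism the abstract describes — an additive term in the embedding objective that pushes the coordinates of concept-related words up along a designated dimension — can be illustrated with a toy numpy sketch. This is an illustrative form only, not the paper's exact objective (which modifies the full embedding loss); the vocabulary, concept assignment, and penalty weight below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 8, 4                               # toy vocabulary size and embedding dimension
W = rng.normal(scale=0.1, size=(V, D))    # toy word vectors (hypothetical setup)

# Hypothetical concept assignment: words 0-2 belong to a concept mapped to dimension 0
# (in the paper, such groups come from an external resource like Roget's Thesaurus).
concept_words, concept_dim = [0, 1, 2], 0

def interpretability_term(W, words, dim, k=1.0):
    # Additive penalty rewarding large values of the chosen coordinate for
    # concept words; minimizing it drives those coordinates upward.
    return -k * np.sum(W[words, dim])

def grad_step(W, words, dim, lr=0.1, k=1.0):
    # The gradient of the term above w.r.t. each selected coordinate is -k,
    # so one gradient-descent step increases those coordinates by lr * k.
    W = W.copy()
    W[words, dim] += lr * k
    return W

W2 = W
for _ in range(50):
    W2 = grad_step(W2, concept_words, concept_dim)

# Concept words now score higher on the concept dimension than the others,
# so that dimension acquires an interpretable meaning.
print(W2[concept_words, concept_dim].mean() > W2[3:, concept_dim].mean())  # → True
```

In the actual method this term is added to the embedding algorithm's own loss, so the semantic co-occurrence objective still shapes all coordinates and the alignment also spreads to related words outside the thesaurus.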
dc.description.provenance: Submitted by Zeynep Aykut (zeynepay@bilkent.edu.tr) on 2021-03-08T08:11:14Z. No. of bitstreams: 1. Imparting_interpretability_to_word_embeddings_while_preserving_semantic_structure.pdf: 1052975 bytes, checksum: 728464ddefefe2cbb6d981fd8078331e (MD5)
dc.identifier.doi: 10.1017/S1351324920000315
dc.identifier.eissn: 1469-8110
dc.identifier.issn: 1351-3249
dc.identifier.uri: http://hdl.handle.net/11693/75864
dc.language.iso: English
dc.publisher: Cambridge University Press
dc.relation.isversionof: https://dx.doi.org/10.1017/S1351324920000315
dc.source.title: Natural Language Engineering
dc.subject: Word embeddings
dc.subject: Interpretability
dc.subject: Computational semantics
dc.title: Imparting interpretability to word embeddings while preserving semantic structure
dc.type: Article

Files

Original bundle
Name: Imparting_interpretability_to_word_embeddings_while_preserving_semantic_structure.pdf
Size: 1 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission