Graph receptive transformer encoder for text classification

buir.contributor.author: Aras, Arda Can
buir.contributor.author: Alikaşifoğlu, Tuna
buir.contributor.author: Koç, Aykut
buir.contributor.orcid: Aras, Arda Can | 0009-0000-0378-1779
buir.contributor.orcid: Alikaşifoğlu, Tuna | 0000-0001-8030-8088
buir.contributor.orcid: Koç, Aykut | 0000-0002-6348-2663
dc.citation.epage: 359
dc.citation.spage: 347
dc.citation.volumeNumber: 10
dc.contributor.author: Aras, Arda Can
dc.contributor.author: Alikaşifoğlu, Tuna
dc.contributor.author: Koç, Aykut
dc.date.accessioned: 2025-02-27T06:14:33Z
dc.date.available: 2025-02-27T06:14:33Z
dc.date.issued: 2024
dc.department: Department of Electrical and Electronics Engineering
dc.department: National Magnetic Resonance Research Center (UMRAM)
dc.description.abstract: By employing attention mechanisms, transformers have made great improvements in nearly all NLP tasks, including text classification. However, the context of the transformer's attention mechanism is limited to single sequences, and their fine-tuning stage can utilize only inductive learning. Focusing on broader contexts by representing texts as graphs, previous works have generalized transformer models to graph domains to employ attention mechanisms beyond single sequences. However, these approaches either require exhaustive pre-training stages, learn only transductively, or can learn inductively without utilizing pre-trained models. To address these problems simultaneously, we propose the Graph Receptive Transformer Encoder (GRTE), which combines graph neural networks (GNNs) with large-scale pre-trained models for text classification in both inductive and transductive fashions. By constructing heterogeneous and homogeneous graphs over given corpora and not requiring a pre-training stage, GRTE can utilize information from both large-scale pre-trained models and graph-structured relations. Our proposed method retrieves global and contextual information in documents and generates word embeddings as a by-product of inductive inference. We compared the proposed GRTE with a wide range of baseline models through comprehensive experiments. Compared to the state-of-the-art, we demonstrated that GRTE improves model performance and offers computational savings of up to ~100×.
dc.identifier.doi: 10.1109/TSIPN.2024.3380362
dc.identifier.eissn: 2373-776X
dc.identifier.uri: https://hdl.handle.net/11693/116885
dc.language.iso: English
dc.publisher: IEEE
dc.relation.isversionof: https://dx.doi.org/10.1109/TSIPN.2024.3380362
dc.rights: CC BY-NC-ND 4.0 DEED (Attribution-NonCommercial-NoDerivatives 4.0 International)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.source.title: IEEE Transactions on Signal and Information Processing over Networks
dc.subject: BERT
dc.subject: Graph convolutional networks (GCNs)
dc.subject: Graph neural networks (GNNs)
dc.subject: Inductive
dc.subject: Text classification
dc.subject: Transductive
dc.subject: Transformers
dc.title: Graph receptive transformer encoder for text classification
dc.type: Article
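
The abstract above describes combining embeddings from a large-scale pre-trained model (e.g., BERT) with graph neural networks over corpus-level graphs for inductive and transductive text classification. The sketch below is a minimal illustration of that general idea only, not the GRTE architecture from the paper: node features that would come from a pre-trained encoder are refined by a single graph-convolution layer over a normalized corpus adjacency before classification. The class name, shapes, and toy data are assumptions.

# Minimal illustrative sketch (assumed PyTorch): pre-trained node embeddings + one GCN layer.
import torch
import torch.nn as nn

class GCNOverEncoder(nn.Module):
    def __init__(self, emb_dim, hidden_dim, num_classes):
        super().__init__()
        self.gcn = nn.Linear(emb_dim, hidden_dim)        # one graph-convolution layer
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, node_feats, adj):
        # adj: dense, symmetrically normalized adjacency over document/word nodes,
        # e.g., built from word-word PMI and word-document TF-IDF edges (TextGCN-style).
        h = torch.relu(self.gcn(adj @ node_feats))       # neighborhood aggregation
        return self.classifier(h)

# Toy usage with random stand-ins for pre-trained (e.g., BERT) node embeddings.
N, emb_dim = 6, 768
node_feats = torch.randn(N, emb_dim)                     # would come from a pre-trained encoder
adj = torch.eye(N)                                       # placeholder normalized adjacency
model = GCNOverEncoder(emb_dim, 128, num_classes=4)
logits = model(node_feats, adj)                          # shape (N, num_classes)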

Files

Original bundle

Name: Graph_Receptive_Transformer_Encoder_for_Text_Classification.pdf
Size: 2.46 MB
Format: Adobe Portable Document Format
