Browsing by Subject "Modeling languages"
Now showing 1 - 3 of 3
Item (Open Access): Architecture framework for mapping parallel algorithms to parallel computing platforms (CEUR-WS, 2013)
Authors: Tekinerdogan, Bedir; Arkin, E.
Mapping parallel algorithms to parallel computing platforms requires several activities, such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, and the mapping of the algorithm to that logical configuration. Unfortunately, current parallel computing approaches do not seem to offer precise modeling approaches for supporting the mapping process. The lack of a clear and precise modeling approach for parallel computing impedes the communication and analysis of the decisions behind mapping parallel algorithms to parallel computing platforms. In this paper we present an architecture framework for modeling the various views that are related to the mapping process. An architecture framework organizes and structures the proposed architectural viewpoints. We propose a coherent set of five viewpoints for supporting the mapping of parallel algorithms to parallel computing platforms. We illustrate the architecture framework on the mapping of an array increment algorithm to a parallel computing platform. Copyright © 2013 for the individual papers by the papers' authors.

Item (Open Access): Semantic similarity between Turkish and European languages using word embeddings (IEEE, 2017)
Authors: Şenel, Lütfü Kerem; Yücesoy, V.; Koç, A.; Çukur, Tolga
The representation of words from the vocabulary of a language as real-valued vectors in a high-dimensional space is called a word embedding. Word embeddings have proven successful in modelling semantic relations between words and in numerous natural language processing applications. Although developed mainly for English, word embeddings perform well for many other languages. In this study, semantic similarity between Turkish (two different corpora) and five basic European languages (English, German, French, Spanish, Italian) is calculated using word embeddings over a fixed vocabulary, and the obtained results are verified using statistical testing. The effect of using different corpora and of additional preprocessing steps on the performance of word embeddings on similarity and analogy test sets prepared for Turkish is also studied.

Item (Open Access): VISPool: enhancing transformer encoders with vector visibility graph neural networks (Association for Computational Linguistics, 2024-08-16)
Authors: Alikaşifoğlu, Tuna; Aras, Arda Can; Koç, Aykut
The emergence of transformers has revolutionized natural language processing (NLP), as evidenced in various NLP tasks. While graph neural networks (GNNs) show recent promise in NLP, they are not standalone replacements for transformers. Rather, recent research explores combining transformers and GNNs. Existing GNN-based approaches rely on static graph construction methods that require excessive text processing, and most of them do not scale with increasing document and word counts. We address these limitations by proposing a novel dynamic graph construction method for text documents based on vector visibility graphs (VVGs) generated from transformer output. Then, we introduce the visibility pooler (VISPool), a scalable model architecture that seamlessly integrates VVG convolutional networks into transformer pipelines. We evaluate the proposed model on the General Language Understanding Evaluation (GLUE) benchmark datasets.
VISPool outperforms the baselines with fewer trainable parameters, demonstrating the viability of the visibility-based graph construction method for enhancing transformers with GNNs. © 2024 Association for Computational Linguistics.
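For readers unfamiliar with visibility graphs, the sketch below constructs the classic natural visibility graph of a scalar sequence, the idea that vector visibility graphs generalize to the vector-valued token representations produced by a transformer encoder. This is a minimal illustration only: the function name and toy sequence are hypothetical, and the exact projection-based VVG criterion used by VISPool is defined in the paper itself, not here.

```python
import numpy as np

def natural_visibility_graph(y):
    """Return the edge set of the natural visibility graph of a scalar sequence.

    Two samples (a, y[a]) and (b, y[b]) are connected if every intermediate
    sample lies strictly below the straight line joining them.
    """
    n = len(y)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = True
            for c in range(a + 1, b):
                # Height of the line from (a, y[a]) to (b, y[b]) at position c
                line_height = y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                if y[c] >= line_height:
                    visible = False
                    break
            if visible:
                edges.add((a, b))
    return edges

# Toy example: a short scalar sequence standing in for per-token values.
seq = np.array([0.8, 0.3, 0.5, 0.1, 0.9, 0.4])
print(sorted(natural_visibility_graph(seq)))
```

Because the graph is derived directly from the sequence of values, it can be rebuilt on the fly for each input rather than precomputed from the text, which is the dynamic-construction property the VISPool abstract emphasizes.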