Department of Computer Engineering
Permanent URI for this community: https://hdl.handle.net/11693/115574
Browsing Department of Computer Engineering by Title
Now showing 1 - 20 of 1670
Item Open Access
1.5D parallel sparse matrix-vector multiply (Society for Industrial and Applied Mathematics, 2018)
Kayaaslan, E.; Aykanat, Cevdet; Uçar, B.
There are three common parallel sparse matrix-vector multiply algorithms: 1D row-parallel, 1D column-parallel, and 2D row-column-parallel. The 1D parallel algorithms offer the advantage of having only one communication phase. On the other hand, the 2D parallel algorithm is more scalable but suffers from two communication phases. Here, we introduce a novel concept of heterogeneous messages, where a heterogeneous message may contain both input-vector entries and partially computed output-vector entries. This concept not only decreases the number of messages but also enables fusing the input- and output-communication phases into a single phase. These findings are exploited to propose a 1.5D parallel sparse matrix-vector multiply algorithm called local row-column-parallel. The proposed algorithm requires a constrained fine-grain partitioning in which each fine-grain task is assigned to the processor that contains its input-vector entry, its output-vector entry, or both. We propose two methods to carry out the constrained fine-grain partitioning, and we conduct experiments on a large set of test matrices to evaluate the partitioning quality and partitioning time of the proposed 1.5D methods.

Item Open Access
2010 IAPR workshop on pattern recognition in remote sensing, PRRS 2010: preface (2010)
Aksoy, S.; Younan, N. H.; Forstner, W.

Item Open Access
3D hair sketching for real-time dynamic & key frame animations (Springer, 2008-07)
Aras, R.; Başarankut, B.; Çapın, T.; Özgüç, B.
Physically based simulation of human hair is a well-studied and well-known problem. But the "pure" physically based representation of hair (and other animation elements) is not the only concern of animators, who want to "control" the creation and animation phases of the content.
This paper describes a sketch-based tool with which a user can both create hair models with different styling parameters and produce animations of these models using physically based and key frame-based techniques. The model creation and animation production tasks are all performed with direct-manipulation techniques in real time. © 2008 Springer-Verlag.

Item Open Access
3D human pose search using oriented cylinders (IEEE, 2009-09-10)
Pehlivan, Selen; Duygulu, Pınar
In this study, we present a representation based on a new 3D search technique for volumetric human poses, which is then used to recognize actions in three-dimensional video sequences. We generate a set of cylinder-like 3D kernels in various sizes and orientations. These kernels are searched over 3D volumes to find high-response regions. The distribution of these responses is then used to represent a 3D pose. We use the proposed representation for (i) pose retrieval using Nearest Neighbor (NN) and Support Vector Machine (SVM) based classification methods, and (ii) action recognition on a set of actions using Dynamic Time Warping (DTW) and Hidden Markov Model (HMM) based classification methods. Evaluations on the IXMAS dataset support the effectiveness of such a robust pose representation. ©2009 IEEE.

Item Open Access
3D model compression using connectivity-guided adaptive wavelet transform built into 2D SPIHT (Academic Press, 2010-01)
Köse, K.; Çetin, A. Enis; Güdükbay, Uğur; Onural, L.
A Connectivity-Guided Adaptive Wavelet Transform based mesh compression framework is proposed. The transformation uses the connectivity information of the 3D model to exploit inter-pixel correlations. Orthographic projection is used to convert the 3D mesh into a 2D image-like representation. The proposed conversion method does not change the connectivity among the vertices of the 3D model. There is a correlation between the pixels of the composed image due to the connectivity of the 3D mesh.
The proposed wavelet transform uses an adaptive predictor that exploits the connectivity information of the 3D model; standard image compression tools cannot take advantage of these correlations between the samples. The wavelet-transformed data is then encoded using a zero-tree wavelet based method. Since the encoder creates a hierarchical bitstream, the proposed technique is a progressive mesh compression technique. Experimental results show that the proposed method has better rate-distortion performance than the MPEG-3DGC/MPEG-4 mesh coder.

Item Open Access
3D thumbnails for 3D videos with depth (IGI Global, 2011)
Yigit, Y.; Isler, S. F.; Capin, T.
In this chapter, we present a new thumbnail format for 3D videos with depth, the 3D thumbnail, which helps users understand the content by preserving the recognizable features and qualities of 3D videos. Current thumbnail solutions do not convey the general idea of the content and are not illustrative. Despite the existence of 3D media content databases, there is no thumbnail representation for 3D content. Thus, we propose a framework that generates 3D thumbnails from layered depth video (LDV) and video plus depth (V+D) by using two different methodologies on importance maps: saliency-depth and layer-based approaches. Finally, several experiments are presented that indicate 3D thumbnails are illustrative. © 2012, IGI Global.

Item Open Access
3D thumbnails for mobile media browser interface with autostereoscopic displays (Springer, 2010-01)
Gündoğdu, R. Bertan; Yiğit, Yeliz; Çapin, Tolga
In this paper, we focus on the problem of how to visualize and browse 3D videos and 3D images in a media browser application running on a 3D-enabled mobile device with an autostereoscopic display. We propose a 3D thumbnail representation format and an algorithm for automatic 3D thumbnail generation from 3D video-plus-depth content.
Then, we present different 3D user interface layout schemes for 3D thumbnails and discuss these layouts with a focus on their usability and ergonomics. © 2010 Springer-Verlag Berlin Heidelberg.

Item Open Access
3DTV-conference: the true vision-capture, transmission and display of 3D video, 3DTV-CON 2008 proceedings: preface (2008)
Güdükbay, U.; Alatan, A. A.

Item Open Access
60 GHz wireless data center networks: A survey (Elsevier BV * North-Holland, 2021-02-11)
Terzi, Çağlar; Körpeoğlu, İbrahim
Data centers (DCs) have become an important part of computing today. Many Internet services run on DCs, and a great deal of research aims to tackle the challenges of high-performance and energy-efficient data center networking (DCN). Hot-node congestion, cabling complexity/cost, and cooling cost are some of the important data center issues that need further investigation. The static and rigid topology of wired DCNs is another issue that hinders flexibility. The use of wireless links in DCNs to eliminate these disadvantages has been proposed and is an important research topic. In this paper, we review research studies in the literature on the design of radio frequency (RF) based wireless data center networks. RF wireless DCNs can be grouped into two classes, hybrid (wireless and wired) and completely wireless data centers; we investigate both. We also compare wireless DCN solutions in the literature with respect to various aspects, and we discuss open areas and research ideas.

Item Open Access
A broad ensemble learning system for drifting stream classification (Institute of Electrical and Electronics Engineers, 2023-08-21)
Bakhshi, Sepehr; Ghahramanian, Pouya; Bonab, H.; Can, Fazlı
In a data stream environment, classification models must handle concept drift effectively and efficiently. Ensemble methods are widely used for this purpose; however, the ones available in the literature either use a large data chunk to update the model or learn the data one by one.
In the former, the model may miss changes in the data distribution, while in the latter, the model may suffer from inefficiency and instability. To address these issues, we introduce a novel ensemble approach based on the Broad Learning System (BLS), in which mini-chunks are used at each update. BLS is an effective lightweight neural architecture recently developed for incremental learning. Although it is fast, it requires large data chunks for effective updates and is unable to handle the dynamic changes observed in data streams. Our proposed approach, named Broad Ensemble Learning System (BELS), uses a novel updating method that significantly improves best-in-class model accuracy. It employs an ensemble of output layers to address the limitations of BLS and handle drifts. Our model tracks the changes in the accuracy of the ensemble components and reacts to these changes. We present our mathematical derivation of BELS, perform comprehensive experiments with 35 datasets that demonstrate the adaptability of our model to various drift types, and provide hyperparameter, ablation, and imbalanced-dataset performance analyses. The experimental results show that the proposed approach outperforms 10 state-of-the-art baselines and supplies an overall improvement of 18.59% in terms of average prequential accuracy.

Item Open Access
A guide for developing comprehensive systems biology maps of disease mechanisms: planning, construction and maintenance (Frontiers Media S.A., 2023-06-22)
Mazein, A.; Acencio, M. L.; Balaur, I.; Rougny, A.; Welter, D.; Niarakis, A.; Ramirez Ardila, D.; Doğrusöz, Uğur; Gawron, P.; Satagopam, V.; Gu, W.; Kremer, A.; Schneider, R.; Ostaszewski, M.
As a conceptual model of disease mechanisms, a disease map integrates available knowledge and is applied for data interpretation, prediction, and hypothesis generation. It is possible to model disease mechanisms at different levels of granularity and adjust the approach to the goals of a particular project.
This rich environment, together with the requirements for high-quality network reconstruction, makes it challenging for new curators and groups to be introduced quickly to the development methods. In this review, we offer a step-by-step guide for developing a disease map within its mainstream pipeline, which involves using the CellDesigner tool for creating and editing diagrams and the MINERVA Platform for online visualisation and exploration. We also describe how the Neo4j graph database environment can be used for efficiently managing and querying such a resource. For assessing interoperability and reproducibility, we apply the FAIR principles.

Item Open Access
A novel neural ensemble architecture for on-the-fly classification of evolving text streams (Association for Computing Machinery (ACM), 2024)
Ghahramanian, Pouya; Bakhshi, Sepehr; Bonab, Hamed; Can, Fazlı
We study on-the-fly classification of evolving text streams in which the relation between the input data and target labels changes over time, i.e., "concept drift." These variations decrease the model's performance, as predictions become less accurate over time, and they necessitate a more adaptable system. While most studies focus on concept drift detection and handling with ensemble approaches, the application of neural models in this area is relatively less studied. We introduce the Adaptive Neural Ensemble Network (AdaNEN), a novel ensemble-based neural approach capable of handling concept drift in data streams. With our novel architecture, we address some of the problems neural models face when exploited in online adaptive learning environments. Most current studies address concept drift detection and handling in numerical streams, and evolving text stream classification remains relatively unexplored. We hypothesize that the lack of public, large-scale experimental data could be one reason.
To this end, we propose a method based on an existing approach for generating evolving text streams by introducing various types of concept drift into real-world text datasets. We provide an extensive evaluation of our proposed approach using 12 state-of-the-art baselines and 13 datasets. We first evaluate the concept drift handling capability of AdaNEN and the baseline models on evolving numerical streams; this aims to demonstrate the concept drift handling capabilities of our method on a general spectrum and motivate its use for evolving text streams. The models are then evaluated on evolving text stream classification. Our experimental results show that AdaNEN consistently outperforms the existing approaches in terms of predictive performance with conservative efficiency.

Item Embargo
A process model for AI-enabled software development: a synthesis from validation studies in white literature (John Wiley & Sons Ltd., 2025-01)
Erdogan, Tugba G.; Altunel, Haluk; Tarhan, Ayça K.
**Context:** With the fast advancement of techniques in artificial intelligence (AI) and of the target infrastructures in the last decades, AI software is becoming an undeniable part of software system projects. As in most cases in history, however, development methods and guides follow the advancements in technology with phase differences. **Purpose:** With an aim to elicit and integrate available evidence from AI software development practices into a process model, this study synthesizes the contributions of the validation studies reported in the scientific literature. **Method:** We applied a systematic literature review to retrieve, select, and analyze the primary studies. After a comprehensive and rigorous search and scoping review, we identified 82 studies that make various contributions in relation to AI software development practices.
To increase the effectiveness of the synthesis and the usefulness of the outcome, we selected for detailed analysis 14 primary studies (out of 82) that empirically validated their contributions. **Results:** We carefully reviewed the selected studies, which validate proposals on approaches/models, methods/techniques, tasks/phases, lessons learned/best practices, or workflows. We mapped the steps/activities in these proposals to the knowledge areas in SWEBOK, and using the evidence in this mapping and the primary studies, we synthesized a process model that integrates activities, artifacts, and roles for AI-enabled software system development. **Conclusion:** To the best of our knowledge, this is the first study that proposes such a process model by eliciting and gathering the contributions of the validation studies in a bottom-up manner. We expect that the output of this synthesis will be input for further research to validate or improve the process model.

Item Open Access
A quantitative style analysis of four Turkish authors: changes over time, and differences (Routledge, 2024-10-14)
Yıldırım, Onur; Can, Fazlı
We present a stylometric analysis of the writings of four famous Turkish authors: Abdülhak Şinasi Hisar, Refik Halid Karay, Ahmet Hamdi Tanpınar, and Halit Ziya Uşaklıgil. Our aim is to internally analyse the shifts in their writing styles and examine the differences between them. First, we evaluate the changes in word lengths in the writers' novels over time and observe that they do not necessarily follow the pattern of writing with longer words as time passes, which was common in 20th-century Turkish literature. We then employ a sliding text window approach to capture shifts in writing style within novels, by focusing on changes in word lengths throughout the entire text. Based on this analysis, we hypothesize a relationship between changes in word lengths and meaning shifts within a novel.
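The sliding text window technique mentioned in the style-analysis abstract above can be illustrated with a short sketch. This is our own illustrative code, not the authors' implementation; the tokenization, window size, and step size are all assumptions:

```python
import re

def sliding_avg_word_length(text, window=500, step=250):
    """Average word length over a window of `window` tokens,
    advanced `step` tokens at a time (both sizes illustrative)."""
    words = re.findall(r"\w+", text.lower())
    profile = []
    for start in range(0, max(len(words) - window + 1, 1), step):
        chunk = words[start:start + window]
        profile.append(sum(len(w) for w in chunk) / len(chunk))
    return profile
```

Peaks and dips in the resulting profile could then be compared against known section or meaning boundaries in a novel, in the spirit of the hypothesis above.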
Next, we investigate the stylochronometry and authorship attribution problems for these four authors and show that their styles change with time and that their works are distinguishable from each other. Finally, we analyse differences in their vocabulary richness within close contexts and demonstrate a strong relationship between poetic writing and lower vocabulary richness in the running text.

Item Open Access
A serious game approach to introduce the code review practice (John Wiley & Sons Ltd., 2025-02)
Ardic, Barış; Tüzün, Eray
Code review is a widely utilized practice that focuses on improving code via manual inspection. However, this practice is not addressed adequately in a typical software engineering curriculum. We aim to help close the code review knowledge gap between software engineering curricula and industry with a serious game approach. We determine our learning objectives around the introduction of the code review process, and to realize these objectives, we design, build, and test a serious game. We then conduct three case studies with a total of 280 students. We evaluated the results by comparing the students' knowledge of and confidence about code review before and after the case studies, as well as by evaluating how they performed in code review quizzes and the game levels themselves. Our analysis indicates that students had a positive experience during gameplay, and an in-depth examination suggests that playing the game also enhanced their knowledge. We conclude that the game had a positive impact on introducing the code review process. This study represents a step toward moving code review education from industry starting positions to higher education.
The game and its auxiliary materials are available online.

Item Open Access
A utilization based genetic algorithm for virtual machine placement in cloud systems (2024-01-15)
Çavdar, Mustafa Can; Körpeoğlu, İbrahim; Ulusoy, Özgür
Due to the increasing demand for cloud computing and related services, cloud providers need methods and mechanisms that increase the performance, availability, and reliability of data centers and cloud systems. Server virtualization is a key component for achieving this: it enables sharing the resources of a single physical machine among multiple virtual machines in a totally isolated manner. Optimizing virtualization has a very significant effect on the overall performance of a cloud computing system, and this requires efficient and effective placement of virtual machines onto physical machines. Since this is an optimization problem that involves multiple constraints and objectives, we propose a method based on genetic algorithms to place virtual machines onto the physical servers of a data center. By considering machine utilization and node distances, our method, called Utilization Based Genetic Algorithm (UBGA), aims to reduce resource waste, network load, and energy consumption at the same time. We compared our method against several other placement methods in terms of utilization achieved, networking bandwidth consumed, and energy costs incurred, using the open-source, publicly available CloudSim simulator. The results show that our method provides better performance than other placement approaches.

Item Open Access
Abstract 207: the cBioPortal for cancer genomics (American Association for Cancer Research (AACR), 2021)
Gao, J.; Mazor, T.; de Bruijn, I.; Abeshouse, A.; Baiceanu, D.; Erkoç, Z.; Gross, B.; Higgins, D.; Jagannathan, P. K.; Kalletla, K.; Kumari, Priti; Kundra, R.; Li, X.; Lindsay, J.; Lisman, A.; Lukasse, P.; Madala, D.; Madupuri, R.; Ochoa, A.; Plantalech, O.; Quach, J.; Rodenburg, S.; Satravada, A.; Schaeffer, F.; Sheridan, R.; Sikina, L.; Sümer, S. O.; Sun, Y.; van Dijk, P.; van Nierop, P.; Wang, A.; Wilson, M.; Zhang, H.; Zhao, G.; van Hagen, S.; van Bochove, K.; Doğrusöz, Uğur; Heath, A.; Resnick, A.; Pugh, T. J.; Sander, C.; Cerami, E.; Schultz, N.
The cBioPortal for Cancer Genomics is an open-source software platform that enables interactive, exploratory analysis of large-scale cancer genomics data sets with a user-friendly interface. It integrates genomic and clinical data and provides a suite of visualization and analysis options, including OncoPrint, mutation diagrams, variant interpretation, survival analysis, expression correlation analysis, alteration enrichment analysis, and cohort- and patient-level visualization, among others. The public site (https://www.cbioportal.org) hosts data from almost 300 studies spanning individual labs and large consortia. Data is also available in the cBioPortal Datahub (https://github.com/cBioPortal/datahub/). In 2020 we added data from 21 studies, totaling almost 30,000 samples. In addition, we added data to existing TCGA PanCancer Atlas studies, including MSI status, mRNA-seq z-scores relative to normal tissue, microbiome data, and RPPA-based protein expression. The cBioPortal also supports AACR Project GENIE with a dedicated instance hosting the GENIE cohort of 112,000 clinically sequenced samples from 19 institutions worldwide (https://genie.cbioportal.org). The site is accessed by over 30,000 unique visitors per month. To support these users, we hosted a five-part instructional webinar series. Recordings of these webinars are available on our website and have already been viewed thousands of times. In addition, more than 50 instances are installed at academic institutions and pharmaceutical/biotechnology companies.
In support of these local instances, we continue to simplify the installation process: we now provide a Docker Compose solution that includes all microservices needed to run the web app, as well as data validation, import, and migration. We continue to enhance and expand the functionality of cBioPortal. This year we significantly enhanced the group comparison feature; it is now integrated into gene-specific queries and supports comparison of more data types, including DNA methylation, microbiome, and any outcome measure. We also expanded support for longitudinal data: the existing patient timeline has been refactored and now supports a wider range of data and visualizations; a new "Genomic Evolution" tab highlights changes in mutation allele frequencies across multiple samples from a patient; and samples can now be selected based on pre- or post-treatment status. Other features released this year include allowing users to add gene-level plots for continuous molecular profiles in study view, enabling users to select the desired transcript on the Mutations tab, and integration of PathwayMapper. The cBioPortal is fully open source (https://github.com/cBioPortal/) under a GNU Affero GPL license. Development is a collaborative effort among groups at Memorial Sloan Kettering Cancer Center, Dana-Farber Cancer Institute, Children's Hospital of Philadelphia, Princess Margaret Cancer Centre, Bilkent University, and The Hyve.

Item Open Access
Abstract metaprolog engine (Elsevier, 1998)
Cicekli, I.
A compiler-based meta-level system for the MetaProlog language is presented. Since MetaProlog is a meta-level extension of Prolog, the Warren Abstract Machine (WAM) is extended to obtain an efficient implementation of meta-level facilities; this extension is called the Abstract MetaProlog Engine (AMPE). Since theories and proofs are the main meta-level objects in MetaProlog, we discuss their representations and implementations in detail.
First, we describe how to efficiently represent theories and derivability relations. At the same time, we present the core part of the AMPE, which supports multiple theories and fast context switching among theories in the MetaProlog system. Then we describe how to compute proofs, how to shrink the search space of a goal using partially instantiated proofs, and how to represent other control knowledge in a WAM-based system. In addition to computing proofs that are just the success branches of search trees, fail branches can also be computed and used in the reasoning process.

Item Open Access
Accelerating genome analysis: a primer on an ongoing journey (IEEE, 2020)
Alser, M.; Bingöl, Zülal; Cali, D. S.; Kim, J.; Ghose, S.; Alkan, Can; Mutlu, Onur
Genome analysis fundamentally starts with a process known as read mapping, where sequenced fragments of an organism's genome are compared against a reference genome. Read mapping is currently a major bottleneck in the entire genome analysis pipeline, because state-of-the-art genome sequencing technologies can sequence a genome much faster than the computational techniques employed to analyze it. We describe the ongoing journey toward significantly improving the performance of read mapping. We explain state-of-the-art algorithmic methods and hardware-based acceleration approaches. Algorithmic approaches exploit the structure of the genome as well as the structure of the underlying hardware. Hardware-based acceleration approaches exploit specialized microarchitectures or various execution paradigms (e.g., processing inside or near memory). We conclude with the challenges of adopting these hardware-accelerated read mappers.

Item Open Access
Accelerating read mapping with FastHASH (BioMed Central Ltd., 2013)
Xin, H.; Lee, D.; Hormozdiari, F.; Yedkar, S.; Mutlu, O.; Alkan, C.
With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data.
The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amounts of sequence data quickly and accurately. Unfortunately, current read mapping algorithms have difficulty coping with the massive amounts of data generated by NGS. We propose a new algorithm, FastHASH, which drastically improves the performance of seed-and-extend type, hash table based read mapping algorithms while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering and Cheap K-mer Selection. We implemented FastHASH and merged it into the codebase of the popular read mapping program mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness. © 2013 Xin et al.
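The FastHASH abstract above concerns seed-and-extend, hash table based read mapping. As a rough illustration of that baseline scheme (our own sketch, not the FastHASH implementation; the k-mer length, function names, and Hamming-distance verification are all our assumptions), the seed-and-verify loop looks like:

```python
from collections import defaultdict

K = 5  # k-mer (seed) length, chosen here for illustration only

def build_index(reference):
    """Hash table mapping each k-mer to its positions in the reference."""
    index = defaultdict(list)
    for i in range(len(reference) - K + 1):
        index[reference[i:i + K]].append(i)
    return index

def hamming(a, b):
    """Number of mismatching characters between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def map_read(read, reference, index, max_mismatch=2):
    """Seed with the read's k-mers, then verify (extend) each candidate
    location by direct comparison against the reference."""
    hits = set()
    for offset in range(0, len(read) - K + 1, K):
        for pos in index.get(read[offset:offset + K], []):
            start = pos - offset
            if 0 <= start <= len(reference) - len(read):
                if hamming(read, reference[start:start + len(read)]) <= max_mismatch:
                    hits.add(start)
    return sorted(hits)
```

Real mappers such as mrFAST verify candidates with edit distance rather than Hamming distance; FastHASH's Adjacency Filtering and Cheap K-mer Selection prune candidate locations before this costly verification step, which is where the reported speedup comes from.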