Department of Computer Engineering
Permanent URI for this community: https://hdl.handle.net/11693/115574
Browsing Department of Computer Engineering by Title
Now showing 1 - 20 of 1623
Item Open Access
1.5D parallel sparse matrix-vector multiply (Society for Industrial and Applied Mathematics, 2018) Kayaaslan, E.; Aykanat, Cevdet; Uçar, B.
There are three common parallel sparse matrix-vector multiply algorithms: 1D row-parallel, 1D column-parallel, and 2D row-column-parallel. The 1D parallel algorithms offer the advantage of having only one communication phase. The 2D parallel algorithm, on the other hand, is more scalable but suffers from two communication phases. Here, we introduce the novel concept of heterogeneous messages, where a heterogeneous message may contain both input-vector entries and partially computed output-vector entries. This concept not only decreases the number of messages but also enables fusing the input- and output-communication phases into a single phase. These findings are exploited to propose a 1.5D parallel sparse matrix-vector multiply algorithm called local row-column-parallel. The proposed algorithm requires a constrained fine-grain partitioning in which each fine-grain task is assigned to the processor that holds its input-vector entry, its output-vector entry, or both. We propose two methods to carry out the constrained fine-grain partitioning. We conduct experiments on a large set of test matrices to evaluate the partitioning quality and partitioning time of these proposed 1.5D methods.

Item Open Access
2010 IAPR workshop on pattern recognition in remote sensing, PRRS 2010: preface (2010) Aksoy, S.; Younan, N. H.; Forstner, W.

Item Open Access
3D hair sketching for real-time dynamic & key frame animations (Springer, 2008-07) Aras, R.; Başarankut, B.; Çapın, T.; Özgüç, B.
Physically based simulation of human hair is a well-studied and well-known problem. However, a "pure" physically based representation of hair (and other animation elements) is not the only concern of animators, who also want to "control" the creation and animation phases of the content.
This paper describes a sketch-based tool with which a user can both create hair models with different styling parameters and produce animations of the created models using physically based and key-frame-based techniques. The model creation and animation production tasks are all performed with direct manipulation techniques in real time. © 2008 Springer-Verlag.

Item Open Access
3D human pose search using oriented cylinders (IEEE, 2009-09-10) Pehlivan, Selen; Duygulu, Pınar
In this study, we present a representation based on a new 3D search technique for volumetric human poses, which is then used to recognize actions in three-dimensional video sequences. We generate a set of cylinder-like 3D kernels in various sizes and orientations. These kernels are searched over 3D volumes to find high-response regions. The distribution of these responses is then used to represent a 3D pose. We use the proposed representation for (i) pose retrieval using Nearest Neighbor (NN) based and Support Vector Machine (SVM) based classification methods, and (ii) action recognition on a set of actions using Dynamic Time Warping (DTW) and Hidden Markov Model (HMM) based classification methods. Evaluations on the IXMAS dataset support the effectiveness of such a robust pose representation. © 2009 IEEE.

Item Open Access
3D model compression using connectivity-guided adaptive wavelet transform built into 2D SPIHT (Academic Press, 2010-01) Köse, K.; Çetin, A. Enis; Güdükbay, Uğur; Onural, L.
A Connectivity-Guided Adaptive Wavelet Transform based mesh compression framework is proposed. The transform uses the connectivity information of the 3D model to exploit inter-pixel correlations. Orthographic projection is used to convert the 3D mesh into a 2D image-like representation. The proposed conversion method does not change the connectivity among the vertices of the 3D model. There is a correlation between the pixels of the composed image due to the connectivity of the 3D mesh.
The proposed wavelet transform uses an adaptive predictor that exploits the connectivity information of the 3D model. Standard image compression tools cannot take advantage of the correlations between the samples. The wavelet-transformed data is then encoded using a zero-tree wavelet based method. Since the encoder creates a hierarchical bitstream, the proposed technique is a progressive mesh compression technique. Experimental results show that the proposed method has better rate-distortion performance than the MPEG-3DGC/MPEG-4 mesh coder.

Item Open Access
3D thumbnails for 3D videos with depth (IGI Global, 2011) Yigit, Y.; Isler, S. F.; Capin, T.
In this chapter, we present a new thumbnail format for 3D videos with depth, the 3D thumbnail, which helps users understand the content by preserving the recognizable features and qualities of 3D videos. Current thumbnail solutions do not convey the general idea of the content and are not illustrative. Despite the existence of 3D media content databases, there is no thumbnail representation for 3D content. Thus, we propose a framework that generates 3D thumbnails from layered depth video (LDV) and video plus depth (V+D) by using two different methodologies on importance maps: saliency-depth and layer-based approaches. Finally, several experiments are presented that indicate 3D thumbnails are illustrative. © 2012, IGI Global.

Item Open Access
3D thumbnails for mobile media browser interface with autostereoscopic displays (Springer, 2010-01) Gündoğdu, R. Bertan; Yiğit, Yeliz; Çapın, Tolga
In this paper, we focus on the problem of how to visualize and browse 3D videos and 3D images in a media browser application running on a 3D-enabled mobile device with an autostereoscopic display. We propose a 3D thumbnail representation format and an algorithm for automatic 3D thumbnail generation from 3D video-plus-depth content.
Then, we present different 3D user interface layout schemes for 3D thumbnails and discuss these layouts with a focus on their usability and ergonomics. © 2010 Springer-Verlag Berlin Heidelberg.

Item Open Access
3DTV-conference: the true vision-capture, transmission and display of 3D video, 3DTV-CON 2008 proceedings: preface (2008) Güdükbay, U.; Alatan, A. A.

Item Open Access
60 GHz wireless data center networks: a survey (Elsevier BV * North-Holland, 2021-02-11) Terzi, Çağlar; Körpeoğlu, İbrahim
Data centers (DCs) have become an important part of computing today. Many services on the Internet run on DCs, and a great deal of research addresses the challenges of high-performance and energy-efficient data center networking (DCN). Hot-node congestion, cabling complexity/cost, and cooling cost are some of the important data center issues that need further investigation. The static and rigid topology of wired DCNs is another issue that hinders flexibility. Using wireless links in DCNs to eliminate these disadvantages has been proposed and is an important research topic. In this paper, we review research studies in the literature on the design of radio frequency (RF) based wireless data center networks. RF wireless DCNs can be grouped into two classes, hybrid (wireless and wired) and completely wireless data centers; we investigate both. We also compare wireless DCN solutions in the literature with respect to various aspects, and discuss open areas and research ideas.

Item Open Access
A broad ensemble learning system for drifting stream classification (Institute of Electrical and Electronics Engineers, 2023-08-21) Bakhshi, Sepehr; Ghahramanian, Pouya; Bonab, H.; Can, Fazlı
In a data stream environment, classification models must handle concept drift effectively and efficiently. Ensemble methods are widely used for this purpose; however, the ones available in the literature either use a large data chunk to update the model or learn the data one by one.
In the former, the model may miss changes in the data distribution, while in the latter, it may suffer from inefficiency and instability. To address these issues, we introduce a novel ensemble approach based on the Broad Learning System (BLS), in which mini-chunks are used at each update. BLS is an effective lightweight neural architecture recently developed for incremental learning. Although it is fast, it requires huge data chunks for effective updates and is unable to handle the dynamic changes observed in data streams. Our proposed approach, named Broad Ensemble Learning System (BELS), uses a novel updating method that significantly improves best-in-class model accuracy. It employs an ensemble of output layers to address the limitations of BLS and handle drifts. Our model tracks the changes in the accuracy of the ensemble components and reacts to these changes. We present our mathematical derivation of BELS, perform comprehensive experiments with 35 datasets that demonstrate the adaptability of our model to various drift types, and provide hyperparameter, ablation, and imbalanced-dataset performance analyses. The experimental results show that the proposed approach outperforms 10 state-of-the-art baselines and provides an overall improvement of 18.59% in terms of average prequential accuracy.

Item Open Access
A guide for developing comprehensive systems biology maps of disease mechanisms: planning, construction and maintenance (Frontiers Media S.A., 2023-06-22) Mazein, A.; Acencio, M. L.; Balaur, I.; Rougny, A.; Welter, D.; Niarakis, A.; Ramirez Ardila, D.; Doğrusöz, Uğur; Gawron, P.; Satagopam, V.; Gu, W.; Kremer, A.; Schneider, R.; Ostaszewski, M.
As a conceptual model of disease mechanisms, a disease map integrates available knowledge and is applied to data interpretation, prediction, and hypothesis generation. Disease mechanisms can be modeled at different levels of granularity, and the approach can be adjusted to the goals of a particular project.
This rich environment, together with the requirements for high-quality network reconstruction, makes it challenging for new curators and groups to be introduced quickly to the development methods. In this review, we offer a step-by-step guide for developing a disease map within its mainstream pipeline, which involves using the CellDesigner tool for creating and editing diagrams and the MINERVA Platform for online visualisation and exploration. We also describe how the Neo4j graph database environment can be used to manage and query such a resource efficiently. To assess interoperability and reproducibility, we apply the FAIR principles.

Item Open Access
A utilization based genetic algorithm for virtual machine placement in cloud systems (2024-01-15) Çavdar, Mustafa Can; Körpeoğlu, İbrahim; Ulusoy, Özgür
Due to the increasing demand for cloud computing and related services, cloud providers need methods and mechanisms that increase the performance, availability, and reliability of data centers and cloud systems. Server virtualization is a key component in achieving this: it enables the resources of a single physical machine to be shared among multiple virtual machines in a totally isolated manner. Optimizing virtualization has a very significant effect on the overall performance of a cloud computing system, which requires efficient and effective placement of virtual machines onto physical machines. Since this is an optimization problem involving multiple constraints and objectives, we propose a method based on genetic algorithms to place virtual machines onto the physical servers of a data center. By considering machine utilization and node distances, our method, called Utilization Based Genetic Algorithm (UBGA), aims to reduce resource waste, network load, and energy consumption at the same time.
We compared our method against several other placement methods in terms of utilization achieved, networking bandwidth consumed, and energy costs incurred, using the open-source, publicly available CloudSim simulator. The results show that our method provides better performance than the other placement approaches.

Item Open Access
Abstract 207: the cBioPortal for cancer genomics (American Association for Cancer Research (AACR), 2021) Gao, J.; Mazor, T.; de Bruijn, I.; Abeshouse, A.; Baiceanu, D.; Erkoç, Z.; Gross, B.; Higgins, D.; Jagannathan, P. K.; Kalletla, K.; Kumari, Priti; Kundra, R.; Li, X.; Lindsay, J.; Lisman, A.; Lukasse, P.; Madala, D.; Madupuri, R.; Ochoa, A.; Plantalech, O.; Quach, J.; Rodenburg, S.; Satravada, A.; Schaeffer, F.; Sheridan, R.; Sikina, L.; Sümer, S. O.; Sun, Y.; van Dijk, P.; van Nierop, P.; Wang, A.; Wilson, M.; Zhang, H.; Zhao, G.; van Hagen, S.; van Bochove, K.; Doğrusöz, Uğur; Heath, A.; Resnick, A.; Pugh, T. J.; Sander, C.; Cerami, E.; Schultz, N.
The cBioPortal for Cancer Genomics is an open-source software platform that enables interactive, exploratory analysis of large-scale cancer genomics data sets with a user-friendly interface. It integrates genomic and clinical data and provides a suite of visualization and analysis options, including OncoPrint, mutation diagrams, variant interpretation, survival analysis, expression correlation analysis, alteration enrichment analysis, and cohort- and patient-level visualization, among others. The public site (https://www.cbioportal.org) hosts data from almost 300 studies spanning individual labs and large consortia. Data is also available in the cBioPortal Datahub (https://github.com/cBioPortal/datahub/). In 2020 we added data from 21 studies, totaling almost 30,000 samples. In addition, we added data to the existing TCGA PanCancer Atlas studies, including MSI status, mRNA-seq z-scores relative to normal tissue, microbiome data, and RPPA-based protein expression.
The cBioPortal also supports AACR Project GENIE with a dedicated instance hosting the GENIE cohort of 112,000 clinically sequenced samples from 19 institutions worldwide (https://genie.cbioportal.org). The site is accessed by over 30,000 unique visitors per month. To support these users, we hosted a five-part instructional webinar series; recordings of these webinars are available on our website and have already been viewed thousands of times. In addition, more than 50 instances are installed at academic institutions and pharmaceutical/biotechnology companies. In support of these local instances, we continue to simplify the installation process: we now provide a Docker Compose solution that includes all the microservices needed to run the web app as well as data validation, import, and migration. We continue to enhance and expand the functionality of cBioPortal. This year we significantly enhanced the group comparison feature; it is now integrated into gene-specific queries and supports comparison of more data types, including DNA methylation, microbiome, and any outcome measure. We also expanded support for longitudinal data: the existing patient timeline has been refactored and now supports a wider range of data and visualizations; a new "Genomic Evolution" tab highlights changes in mutation allele frequencies across multiple samples from a patient; and samples can now be selected based on pre- or post-treatment status. Other features released this year include allowing users to add gene-level plots for continuous molecular profiles in the study view, enabling users to select the desired transcript on the Mutations tab, and the integration of PathwayMapper. The cBioPortal is fully open source (https://github.com/cBioPortal/) under the GNU Affero GPL license.
Development is a collaborative effort among groups at Memorial Sloan Kettering Cancer Center, Dana-Farber Cancer Institute, Children's Hospital of Philadelphia, Princess Margaret Cancer Centre, Bilkent University, and The Hyve.

Item Open Access
Abstract metaprolog engine (Elsevier, 1998) Cicekli, I.
A compiler-based meta-level system for the MetaProlog language is presented. Since MetaProlog is a meta-level extension of Prolog, the Warren Abstract Machine (WAM) is extended to obtain an efficient implementation of meta-level facilities; this extension is called the Abstract MetaProlog Engine (AMPE). Since theories and proofs are the main meta-level objects in MetaProlog, we discuss their representations and implementations in detail. First, we describe how to represent theories and derivability relations efficiently. At the same time, we present the core part of the AMPE, which supports multiple theories and fast context switching among theories in the MetaProlog system. Then we describe how to compute proofs, how to shrink the search space of a goal using partially instantiated proofs, and how to represent other control knowledge in a WAM-based system. In addition to computing proofs that are simply the success branches of search trees, fail branches can also be computed and used in the reasoning process.

Item Open Access
Accelerating genome analysis: a primer on an ongoing journey (IEEE, 2020) Alser, M.; Bingöl, Zülal; Cali, D. S.; Kim, J.; Ghose, S.; Alkan, Can; Mutlu, Onur
Genome analysis fundamentally starts with a process known as read mapping, where sequenced fragments of an organism's genome are compared against a reference genome. Read mapping is currently a major bottleneck in the entire genome analysis pipeline, because state-of-the-art genome sequencing technologies can sequence a genome much faster than the computational techniques employed to analyze it. We describe the ongoing journey toward significantly improving the performance of read mapping.
We explain state-of-the-art algorithmic methods and hardware-based acceleration approaches. Algorithmic approaches exploit the structure of the genome as well as the structure of the underlying hardware. Hardware-based acceleration approaches exploit specialized microarchitectures or various execution paradigms (e.g., processing inside or near memory). We conclude with the challenges of adopting these hardware-accelerated read mappers.

Item Open Access
Accelerating read mapping with FastHASH (BioMed Central Ltd., 2013) Xin, H.; Lee, D.; Hormozdiari, F.; Yedkar, S.; Mutlu, O.; Alkan, C.
With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data. The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amount of sequence data quickly and accurately. Unfortunately, current read mapping algorithms have difficulty coping with the massive amounts of data generated by NGS. We propose a new algorithm, FastHASH, which drastically improves the performance of seed-and-extend type, hash table based read mapping algorithms while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering and Cheap K-mer Selection. We implemented FastHASH and merged it into the codebase of the popular read mapping program mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness. © 2013 Xin et al.

Item Open Access
Accelerating the HyperLogLog cardinality estimation algorithm (Hindawi Limited, 2017) Bozkus, C.; Fraguela, B. B.
In recent years, vast amounts of data of different kinds, from pictures and videos from our cameras to software logs from sensor networks and Internet routers operating day and night, have been generated. This has led to new big data problems, which require new algorithms that can handle these large volumes of data and are therefore very computationally demanding. In this paper, we parallelize one of these new algorithms, namely the HyperLogLog algorithm, which estimates the number of distinct items in a large data set with minimal memory usage, lowering the typical memory usage of this type of calculation from O(n) to O(1). We have implemented parallelizations based on OpenMP and OpenCL and evaluated them on a standard multicore system, an Intel Xeon Phi, and two GPUs from different vendors. The results obtained in our experiments, in which we reach a speedup of 88.6 with respect to an optimized sequential implementation, are very positive, particularly taking into account the need to run this kind of algorithm on large amounts of data. © 2017 Cem Bozkus and Basilio B. Fraguela.

Item Open Access
Access pattern-based code compression for memory-constrained systems (Association for Computing Machinery, 2008-09) Ozturk, O.; Kandemir, M.; Chen, G.
Compared to the large spectrum of performance optimizations, relatively little effort has been dedicated to optimizing other aspects of embedded applications, such as memory space requirements, power, real-time predictability, and reliability. In particular, many modern embedded systems operate under tight memory space constraints. One way of addressing this constraint is to compress executable code and data as much as possible.
While researchers have studied efficient hardware- and software-based code compression strategies, many of these techniques do not take application behavior into account; that is, the same compression/decompression strategy is used irrespective of the application being optimized. This article presents an application-sensitive code compression strategy based on the control flow graph (CFG) representation of the embedded program. The idea is to start with a memory image in which all basic blocks of the application are compressed, and to decompress only the blocks that are predicted to be needed in the near future. When the current access to a basic block is over, our approach also decides the point at which the block could be compressed again. We propose and evaluate several compression and decompression strategies that try to reduce memory requirements without excessively increasing the original instruction cycle counts. Some of our strategies make use of profile data, whereas others are fully automatic. Our experimental evaluation using seven applications from the MediaBench suite and three large embedded applications reveals that the proposed code compression strategy is very successful in practice. Our results also indicate that working at basic block granularity, as opposed to procedure granularity, is important for maximizing memory space savings. © 2008 ACM.

Item Open Access
ACMICS: an agent communication model for interacting crowd simulation (Springer, 2017) Kullu, K.; Güdükbay, Uğur; Manocha, D.
Behavioral plausibility is one of the major aims of crowd simulation research. We present a novel approach that simulates communication between agents and assess its influence on overall crowd behavior. Our formulation uses a communication model that aims to simulate human-like communication capability.
The underlying formulation is based on a message structure that corresponds to a simplified version of the Foundation for Intelligent Physical Agents (FIPA) Agent Communication Language Message Structure Specification. Our algorithm distinguishes between low- and high-level communication tasks so that ACMICS can be easily extended and employed in new simulation scenarios. We highlight the performance of our communication model on different crowd simulation scenarios. We also extend our approach to model evacuation behavior in unknown environments. Overall, our communication model has a small runtime overhead and can be used for interactive simulation with tens or hundreds of agents. © 2017, The Author(s).

Item Open Access
ACMICS: an agent communication model for interacting crowd simulation: JAAMAS track (International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2018) Kullu, K.; Güdükbay, Uğur; Manocha, D.
We present and evaluate a novel approach to simulating communication between agents. Our approach distinguishes between low- and high-level communication tasks. This separation makes it easy to extend and use in new scenarios. We highlight the benefits of our approach using different simulation scenarios consisting of hundreds of agents. We also model evacuation behavior in unknown environments and highlight the benefits of our approach, particularly in simulating such behavior.
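The ACMICS entries above describe agent messages modeled on a simplified version of the FIPA ACL message structure. As a rough, illustrative sketch only (not the authors' implementation), such a message can be captured as a small record type: the field names follow parameters from the FIPA ACL Message Structure Specification (performative, sender, receiver, content, conversation id), while the `Agent` mailbox and `dispatch` helper are entirely hypothetical.

```python
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class ACLMessage:
    """Simplified FIPA-ACL-style message; field names follow the FIPA spec,
    but this sketch is illustrative, not the ACMICS implementation."""
    performative: str          # communicative act, e.g. "inform", "request"
    sender: str                # identifier of the sending agent
    receivers: List[str]       # identifiers of the receiving agents
    content: Any               # payload, e.g. an observed hazard location
    conversation_id: str = ""  # groups the messages of one exchange


class Agent:
    """Minimal agent that collects incoming messages in a mailbox (hypothetical)."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.mailbox: List[ACLMessage] = []


def dispatch(msg: ACLMessage, agents: Dict[str, Agent]) -> int:
    """Deliver msg to every named receiver present; return the delivery count."""
    delivered = 0
    for r in msg.receivers:
        if r in agents:
            agents[r].mailbox.append(msg)
            delivered += 1
    return delivered


if __name__ == "__main__":
    agents = {n: Agent(n) for n in ("a1", "a2", "a3")}
    msg = ACLMessage("inform", "a1", ["a2", "a3"],
                     content={"exit_blocked": (12, 7)},
                     conversation_id="evac-1")
    print(dispatch(msg, agents))  # 2
```

Separating the message record (what is said) from the dispatch step (how it is delivered) mirrors the low- versus high-level task split the abstracts describe, which is what makes such a model easy to extend to new scenarios.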