Department of Computer Engineering
Recent Submissions
Item (Metadata only): Software architecture (CRC Press, 2022-05-30)
Tekinerdoğan, Bedir

In this chapter, we provide an overview of software architecture design. We focus on three basic topics: software architecture modeling, software architecture design methods, and software architecture evaluation. Software architecture modeling has evolved from informal box-and-line drawings to extensive architecture frameworks that include various viewpoints for modeling the architecture based on the concerns of stakeholders. Different architecture frameworks have been introduced in the literature, each with its own set of viewpoints. Earlier frameworks assumed a fixed set of viewpoints from which the architect selects the required ones to model the architecture. Recent approaches such as the V&B approach adopt an open-ended approach in which new viewpoints can be designed if deemed necessary. The process of software architecture design usually starts in the early phases of the software development life cycle. Various architecture design methods have been introduced, and we identify five key activities observed in most of them: analyzing concerns, analyzing the domain, designing the architecture, evaluating the architecture, and realizing the architecture. Once the architecture is designed, it can be evaluated against quality criteria. The evaluation results in an impact analysis report that may require refactoring the architecture to align it with the defined quality criteria.

Item (Open Access): CONGA: Copy number variation genotyping in ancient genomes and low-coverage sequencing data (Public Library of Science, 2022-12-14)
Söylev, Arda; Çokoglu, Sevim Seda; Koptekin, Dilek; Alkan, Can; Somel, Mehmet

To date, ancient genome analyses have been largely confined to the study of single nucleotide polymorphisms (SNPs). Copy number variants (CNVs) are a major contributor to disease and to evolutionary adaptation, but identifying CNVs in ancient shotgun-sequenced genomes is hampered by typical low genome coverage (<1×) and short fragments ([removed]1 kbps with F-scores >0.75 at ≥1×, and distinguish between heterozygous and homozygous states. We used CONGA to genotype 10,002 outgroup-ascertained deletions across a heterogeneous set of 71 ancient human genomes spanning the last 50,000 years, produced using variable experimental protocols. A fraction of these (21/71) display divergent deletion profiles unrelated to their population origin, but attributable to technical factors such as coverage and read length. The majority of the sample (50/71), despite originating from nine different laboratories and having coverages ranging from 0.44× to 26× (median 4×) and average read lengths of 52-121 bps (median 69), exhibit coherent deletion frequencies. Across these 50 genomes, inter-individual genetic diversity measured using SNPs and CONGA-genotyped deletions are highly correlated. CONGA-genotyped deletions also display purifying selection signatures, as expected. CONGA thus paves the way for systematic CNV analyses in ancient genomes, despite the technical challenges posed by low and variable genome coverage. © 2022 Söylev et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
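As a rough illustration of the read-depth side of low-coverage deletion genotyping described above (this is not CONGA's actual algorithm, which combines read-depth and read-pair evidence with its own normalization and likelihood model; the depth cutoffs below are invented for illustration), a minimal sketch in Python:

```python
# Illustrative sketch only: genotype a candidate deletion from read depth.
# NOT CONGA's implementation; CONGA combines read-depth and read-pair
# evidence with its own normalization and likelihood model.

def genotype_deletion(observed_depth: float, expected_depth: float,
                      het_cutoff: float = 0.75, hom_cutoff: float = 0.25) -> str:
    """Call a diploid deletion genotype from normalized read depth.

    observed_depth: mean depth inside the candidate deletion interval
    expected_depth: genome-wide (or locally matched) mean depth
    The cutoffs are hypothetical; real tools fit them to coverage and noise.
    """
    if expected_depth <= 0:
        raise ValueError("expected_depth must be positive")
    ratio = observed_depth / expected_depth
    if ratio < hom_cutoff:          # close to zero copies remaining
        return "1/1"                # homozygous deletion
    elif ratio < het_cutoff:        # roughly half the expected depth
        return "0/1"                # heterozygous deletion
    return "0/0"                    # no deletion supported

# Example: 0.9x observed vs. 2.0x expected depth suggests a heterozygous deletion.
print(genotype_deletion(0.9, 2.0))  # -> "0/1"
```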
Item (Open Access): SyBLaRS: A web service for laying out, rendering and mining biological maps in SBGN, SBML and more (Public Library of Science, 2022-11-14)
Balcı, Hakan; Doğrusöz, Uğur; Özgül, Yusuf Ziya; Atayev, Perman

Visualization is a key recurring requirement for effective analysis of relational data, and biology is no exception. It is imperative to annotate and render biological models in standard, widely accepted formats. Finding graph-theoretical properties of pathways, as well as identifying certain paths or subgraphs of interest in a pathway, is also essential for effective analysis of pathway data. Given the size of available biological pathway data nowadays, automatic layout is crucial for understanding the graphical representations of such data. Even though many software tools support graphical display of biological pathways in various formats, none is available as a service for on-demand or batch processing of biological pathways for automatic layout, customized rendering, and mining paths or subgraphs of interest. In addition, many tools with fine rendering capabilities lack decent automatic layout support. To fill this void, we developed a web service named SyBLaRS (Systems Biology Layout and Rendering Service) for automatic layout of biological data in various standard formats as well as construction of customized images of these maps in both raster and scalable vector formats. Some of the supported standards are generic, such as GraphML and JSON, whereas others are specialized to biology, such as SBGNML (The Systems Biology Graphical Notation Markup Language) and SBML (The Systems Biology Markup Language). In addition, SyBLaRS supports calculation and highlighting of a number of well-known graph-theoretical properties as well as some novel graph algorithms that turn a specified set of objects of interest into a minimal pathway of interest. We demonstrate that SyBLaRS can be used both as an offline layout and rendering service to construct customized and annotated pictures of pathway models and as an online service to provide layout and rendering capabilities for systems biology software tools. SyBLaRS is open source and publicly available on GitHub and freely distributed under the MIT license. In addition, a sample deployment is available for public consumption. © 2022 Balci et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
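Since SyBLaRS is offered as a web service, a typical client simply POSTs a graph and its layout and rendering options as JSON. The sketch below is a hypothetical usage example: the endpoint path, option names, and payload fields are placeholders rather than the documented SyBLaRS API, which is described in the project's GitHub repository.

```python
# Hypothetical usage sketch of a SyBLaRS-style layout/rendering service.
# The endpoint path and payload fields are placeholders; consult the SyBLaRS
# GitHub repository for the actual API and a sample deployment URL.
import json
import requests  # third-party: pip install requests

graph = {  # a tiny JSON-encoded graph
    "nodes": [{"id": "A"}, {"id": "B"}, {"id": "C"}],
    "edges": [{"source": "A", "target": "B"}, {"source": "B", "target": "C"}],
}

payload = {
    "graph": graph,
    "layoutOptions": {"name": "fcose"},   # assumed option name
    "imageOptions": {"format": "svg"},    # assumed option name
}

response = requests.post("https://example.org/syblars/json", json=payload, timeout=60)
response.raise_for_status()
result = response.json()                   # positions and/or rendered image data
print(json.dumps(result, indent=2)[:500])
```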
Item (Open Access): Fast characterization of segmental duplication structure in multiple genome assemblies (BioMed Central Ltd, 2022-12)
Išerić, Hamza; Alkan, Can; Hach, Faraz; Numanagić, Ibrahim

Motivation: The increasing availability of high-quality genome assemblies has raised interest in the characterization of genomic architecture. Major architectural elements, such as common repeats and segmental duplications (SDs), increase genome plasticity, which stimulates further evolution by changing the genomic structure and inventing new genes. Optimal computation of SDs within a genome requires quadratic-time local alignment algorithms that are impractical due to the size of most genomes. Additionally, to perform evolutionary analysis, one needs to characterize SDs in multiple genomes and find relations between those SDs and unique (non-duplicated) segments in other genomes. A naïve approach consisting of multiple sequence alignment would make the optimal solution to this problem even more impractical. Thus, there is a need for fast and accurate algorithms to characterize SD structure in multiple genome assemblies to better understand the evolutionary forces that shaped the genomes of today.
Results: Here we introduce a new approach, BISER, to quickly detect SDs in multiple genomes and identify elementary SDs and core duplicons that drive the formation of such SDs. BISER improves on earlier tools by (i) scaling the detection of SDs with low homology to multiple genomes while introducing further 7–33× speed-ups over existing tools, and (ii) characterizing elementary SDs and detecting core duplicons to help trace the evolutionary history of duplications as far back as 300 million years.
Availability and implementation: BISER is implemented in the Seq programming language and is publicly available at https://github.com/0xTCG/biser. © 2022, The Author(s).

Item (Open Access): DeepND: Deep multitask learning of gene risk for comorbid neurodevelopmental disorders (Cell Press, 2022-07-08)
Beyreli, İlayda; Karakahya, Oğuzhan; Çiçek, A. Ercüment

Autism spectrum disorder and intellectual disability are comorbid neurodevelopmental disorders with complex genetic architectures. Despite large-scale sequencing studies, only a fraction of the risk genes has been identified for both. We present a network-based gene risk prioritization algorithm, DeepND, that performs cross-disorder analysis to improve prediction by exploiting the comorbidity of autism spectrum disorder (ASD) and intellectual disability (ID) via multitask learning. Our model leverages information from human brain gene co-expression networks using graph convolutional networks, learning which spatiotemporal neurodevelopmental windows are important for disorder etiologies and improving the state-of-the-art prediction in single- and cross-disorder settings. DeepND identifies the prefrontal and motor-somatosensory cortex (PFC-MSC) brain region and the periods from early to mid-fetal development and from early childhood to young adulthood as the highest neurodevelopmental risk windows for ASD and ID. We investigate ASD- and ID-associated copy-number variation (CNV) regions and report our findings for several susceptibility gene candidates. DeepND can be generalized to analyze any combination of comorbid disorders. © 2022 The Author(s)
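DeepND builds on graph convolutional networks over brain co-expression networks. As background only, the sketch below implements a single generic graph-convolution layer in the common Kipf and Welling formulation; it is not DeepND's multitask architecture, and the toy graph and feature sizes are arbitrary.

```python
# Generic single graph-convolution layer, shown only to illustrate the building
# block DeepND relies on; DeepND itself is a multitask network with additional
# components not reproduced here.
import numpy as np

def gcn_layer(adj: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                       # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))      # symmetric normalization
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights
    return np.maximum(propagated, 0.0)            # ReLU

# Toy example: 4 genes, 3 input features, 2 hidden units.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)       # a small co-expression graph
x = rng.normal(size=(4, 3))
w = rng.normal(size=(3, 2))
print(gcn_layer(adj, x, w).shape)                 # (4, 2)
```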
Item (Open Access): Learning robotic manipulation of natural materials with variable properties for construction tasks (Institute of Electrical and Electronics Engineers, 2022-03-15)
Kalousdian, N. K.; Lochnicki, G.; Hartmann, V. N.; Leder, S.; Oğuz, Özgür S.; Menges, A.; Toussaint, M.

The introduction of robotics and machine learning to architectural construction is leading to more efficient construction practices. So far, robotic construction has largely been implemented on standardized materials, conducting simple, predictable, and repetitive tasks. We present a novel mobile robotic system and corresponding learning approach that takes a step towards the assembly of natural materials with anisotropic mechanical properties for more sustainable architectural construction. Through experiments both in simulation and in the real world, we demonstrate a dynamically adjusted curriculum and randomization approach for the problem of learning manipulation tasks involving materials with biological variability, namely bamboo. Using our approach, robots are able to transport bamboo bundles and reach goal positions during the assembly of bamboo structures.

Item (Open Access): Hybrid image-/data-parallel rendering using island parallelism (Institute of Electrical and Electronics Engineers, 2022-12-06)
Zellmann, S.; Wald, I.; Barbosa, J.; Demirci, Serkan; Şahıstan, Alper; Güdükbay, Uğur

In parallel ray tracing, techniques fall into one of two camps: image-parallel techniques aim at increasing frame rate by replicating scene data across nodes and splitting the rendering work across different ranks, while data-parallel techniques aim at increasing the size of the model that can be rendered by splitting the model across multiple ranks, but typically cannot scale much in frame rate. We propose and evaluate a hybrid approach that combines the advantages of both by splitting a set of N × M ranks into M islands of N ranks each and using data-parallel rendering within each island and image parallelism across islands. We discuss the integration of this concept into four wildly different parallel renderers and evaluate the efficacy of this approach on multiple data sets.
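The island scheme above splits a set of N × M ranks into M islands of N ranks each, with data parallelism inside an island and image parallelism across islands. A minimal sketch of that bookkeeping, assuming a flat MPI-style rank numbering (the paper's renderers integrate this logic in their own ways), is:

```python
# Minimal sketch of island bookkeeping for hybrid image-/data-parallel
# rendering: M islands of N ranks each; data is partitioned within an island,
# and image regions are partitioned across islands. Rank numbering is assumed.

def island_assignment(rank: int, n_per_island: int, num_islands: int):
    """Return (island_id, rank_within_island) for a global rank."""
    if not 0 <= rank < n_per_island * num_islands:
        raise ValueError("rank out of range")
    return rank // n_per_island, rank % n_per_island

def image_tiles_for_island(island_id: int, num_islands: int, num_tiles: int):
    """Round-robin image tiles across islands (image parallelism)."""
    return [t for t in range(num_tiles) if t % num_islands == island_id]

def data_bricks_for_rank(rank_within_island: int, n_per_island: int, num_bricks: int):
    """Round-robin data bricks within an island (data parallelism)."""
    return [b for b in range(num_bricks) if b % n_per_island == rank_within_island]

# Example: 8 ranks as 2 islands of 4 ranks, 16 image tiles, 12 data bricks.
for r in range(8):
    isl, local = island_assignment(r, n_per_island=4, num_islands=2)
    print(r, isl, image_tiles_for_island(isl, 2, 16), data_bricks_for_rank(local, 4, 12))
```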
Item (Open Access): edaGAN: Encoder-Decoder Attention Generative Adversarial Networks for multi-contrast MR image synthesis (Institute of Electrical and Electronics Engineers, 2022-05-16)
Dalmaz, Onat; Sağlam, Baturay; Gönç, Kaan; Çukur, Tolga

Magnetic resonance imaging (MRI) is the preferred modality among radiologists in the clinic due to its superior depiction of tissue contrast. Its ability to capture different contrasts within an exam session allows it to collect additional diagnostic information. However, such multi-contrast MRI exams take a long time to scan, so often only a portion of the required contrasts is acquired. Consequently, synthetic multi-contrast MRI can improve subsequent radiological observations and image analysis tasks such as segmentation and detection. Because of this significant potential, multi-contrast MRI synthesis approaches are gaining popularity. Recently, generative adversarial networks (GANs) have become the de facto choice for synthesis tasks in medical imaging due to their sensitivity to realism and high-frequency structures. In this study, we present a novel generative adversarial approach for multi-contrast MRI synthesis that combines the learning of deep residual convolutional networks with the spatial modulation introduced by an attention gating mechanism to synthesize high-quality MR images. We show the superiority of the proposed approach against various synthesis models on multi-contrast MRI datasets.

Item (Open Access): An analysis of relations among European countries based on UEFA European football championship (Sciendo, 2022)
Duymus, Mustafa; Kokundu, Ilayda Beyreli; Kas, Miray

With increasing globalization in the 21st century, football has become more of an industry than a sport, one that supports a tremendous amount of money circulation. More players have started to play in countries other than those of their original nationality. Some countries have used this evolution of football to improve the quality of their leagues, whose clubs recruit the best players from all around the world. In international football, nations are represented by their best players, and these players may come from a variety of different leagues. To observe the countries that host the best players of these nations, we analyze the trend for the nations represented in the European Football Championship. We construct social networks for the last eight tournaments, from 1992 to 2020, and calculate network-level metrics for each. We find the most influential countries for each tournament and analyze the relationship between country influence and the economic revenue of football in those countries. We use several clustering algorithms to pinpoint the communities in the obtained social networks and discuss the relevance of our findings to cultural and historical events.

Item (Open Access): Implications of the first complete human genome assembly (Cold Spring Harbor Laboratory Press, 2022-03-31)
Alkan, Can; Carbone, Lucia; Dennis, Megan; Ernst, Jason; Evrony, Gilad; Girirajan, Santhosh; Leung, Danny Chi Yeu; Cheng, Clooney C.Y.; MacAlpine, David; Ni, Ting; Ramsay, Michèle; Rowe, Helen

Item (Open Access): Polishing copy number variant calls on exome sequencing data via deep learning (NLM (Medline), 2022-06-13)
Özden, Furkan; Alkan, Can; Çiçek, A. Ercüment

Accurate and efficient detection of copy number variants (CNVs) is of critical importance owing to their significant association with complex genetic diseases. Although algorithms that use whole-genome sequencing (WGS) data provide stable results with mostly valid statistical assumptions, copy number detection on whole-exome sequencing (WES) data shows comparatively lower accuracy. This is unfortunate, as WES data are cost-efficient, compact, and relatively ubiquitous. The bottleneck is primarily due to the noncontiguous nature of the targeted capture: biases in targeted genomic hybridization, GC content, targeting probes, and sample batching during sequencing. Here, we present a novel deep learning model, DECoNT, which uses matched WES and WGS data and learns to correct the copy number variations reported by any off-the-shelf WES-based germline CNV caller. We train DECoNT on the 1000 Genomes Project data, and we show that we can efficiently triple the duplication call precision and double the deletion call precision of the state-of-the-art algorithms. We also show that our model consistently improves the performance independent of (1) sequencing technology, (2) exome capture kit, and (3) CNV caller. Using DECoNT as a universal exome CNV call polisher has the potential to improve the reliability of germline CNV detection on WES data sets. © 2022 Özden et al.; Published by Cold Spring Harbor Laboratory Press.
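DECoNT's polishing idea is supervised: features describing a WES-based CNV call are mapped to the call supported by matched WGS data. The sketch below only conveys that setup with synthetic data and a generic scikit-learn classifier; the hand-picked features are hypothetical, and DECoNT itself uses a neural network trained on 1000 Genomes read-depth signals.

```python
# Schematic of the call-polishing idea: features of a WES CNV call are mapped
# to the label observed in matched WGS data. A generic classifier stands in
# for DECoNT's neural network; features and data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_calls = 500
# Hypothetical per-call features: mean read-depth ratio, call length (log10),
# GC fraction of the targeted region, number of exons spanned.
X = np.column_stack([
    rng.normal(1.0, 0.4, n_calls),
    rng.uniform(3, 6, n_calls),
    rng.uniform(0.3, 0.7, n_calls),
    rng.integers(1, 30, n_calls),
])
# "Ground truth" from matched WGS calls: 0 = no event, 1 = deletion, 2 = duplication.
y = rng.integers(0, 3, n_calls)

polisher = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
print("held-out accuracy on synthetic data:", polisher.score(X[400:], y[400:]))
```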
Item (Open Access): Long-horizon multi-robot rearrangement planning for construction assembly (Institute of Electrical and Electronics Engineers, 2022-08-26)
Hartmann, V.N.; Orthey, A.; Driess, D.; Oğuz, Özgür S.; Toussaint, M.

Robotic construction assembly planning aims to find feasible assembly sequences as well as the corresponding robot paths, and can be seen as a special case of task and motion planning (TAMP). As construction assembly can be parallelized well, it is desirable to plan for multiple robots acting concurrently. Solving TAMP instances with many robots and over a long time horizon is challenging due to coordination constraints and the difficulty of choosing the right task assignment. We present a planning system which enables parallelization of complex task and motion planning problems by iteratively solving smaller subproblems. Combining optimization methods to jointly solve for manipulation constraints with a sampling-based bi-directional space-time path planner enables us to plan cooperative multi-robot manipulation with unknown arrival times. Thus, our solver allows for completing subproblems and tasks with differing timescales and synchronizes them effectively. We demonstrate the approach on multiple construction case studies to show its robustness over long planning horizons and its scalability to many objects and agents. Finally, we also demonstrate the execution of the computed plans on two robot arms to showcase the feasibility in the real world.

Item (Open Access): A unifying network modeling approach for codon optimization (Oxford University Press, 2022-06-28)
Karaşan, Oya; Şen, Alper; Tiryaki, Banu; Çiçek, A. Ercüment

Motivation: Synthesizing genes to be expressed in other organisms is an essential tool in biotechnology. While the many-to-one mapping from codons to amino acids makes the genetic code degenerate, codon usage in a particular organism is not random either. This bias in codon use may have a remarkable effect on the level of gene expression. A number of measures have been developed to quantify a given codon sequence's strength to express a gene in a host organism. Codon optimization aims to find a codon sequence that optimizes one or more of these measures. Efficient computational approaches are needed, since the number of possible codon sequences grows exponentially as the number of amino acids increases.
Results: We develop a unifying modeling approach for codon optimization. With our mathematical formulations based on graph/network representations of amino acid sequences, any combination of measures can be optimized in the same framework by finding a path satisfying additional limitations in an acyclic layered network. We tested our approach on bi-objectives commonly used in the literature, namely Codon Pair Bias versus Codon Adaptation Index and Relative Codon Pair Bias versus Relative Codon Bias. However, our framework is general enough to handle any number of objectives concurrently, with certain restrictions or preferences on the use of specific nucleotide sequences. We implemented our models using Python's Gurobi interface and showed the efficacy of our approach even for the largest proteins available. We also provide experiments showing that highly expressed genes have objective values close to the optimized values in the bi-objective codon design problem.
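The codon optimization framework above searches for a path in an acyclic layered network with one layer per amino-acid position and one node per synonymous codon. The sketch below builds such a layered graph for a toy pairwise score and finds the best codon sequence by dynamic programming; the scoring function is made up, and the paper's actual models optimize established measures (e.g., CPB, CAI) and multiple objectives via mathematical programming.

```python
# Toy layered-network view of codon design: one layer per amino-acid position,
# one node per synonymous codon, edges between consecutive layers weighted by a
# made-up codon-pair score. Only a structural illustration of the paper's idea.

CODONS = {  # small subset of the standard genetic code
    "M": ["ATG"],
    "K": ["AAA", "AAG"],
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
}

def pair_score(c1: str, c2: str) -> float:
    """Hypothetical codon-pair score: here, just the GC content of the pair."""
    pair = c1 + c2
    return sum(base in "GC" for base in pair) / len(pair)

def best_codon_path(protein: str) -> tuple[float, list[str]]:
    """Dynamic program over the layered DAG: maximize the summed pair scores."""
    layers = [CODONS[aa] for aa in protein]
    # best[c] = (score of best path ending at codon c, that path)
    best = {c: (0.0, [c]) for c in layers[0]}
    for layer in layers[1:]:
        new_best = {}
        for c2 in layer:
            score, path = max(
                ((s + pair_score(p[-1], c2), p) for s, p in best.values()),
                key=lambda t: t[0],
            )
            new_best[c2] = (score, path + [c2])
        best = new_best
    return max(best.values(), key=lambda t: t[0])

score, codons = best_codon_path("MKL")
print(score, "-".join(codons))
```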
Item (Open Access): Targeted metabolomics analyses for brain tumor margin assessment during surgery (Oxford University Press, 2022-06-15)
Çakmakçı, D.; Kaynar, Gün; Bund, C.; Piotto, M.; Proust, F.; Namer, I. J.; Çiçek, A. Ercüment

Motivation: Identification and removal of micro-scale residual tumor tissue during brain tumor surgery are key for survival in glioma patients. For this goal, High-Resolution Magic Angle Spinning Nuclear Magnetic Resonance (HRMAS NMR) spectroscopy-based assessment of tumor margins during surgery has been an effective method. However, the time required for metabolite quantification and the need for human experts such as a pathologist to be present during surgery are major bottlenecks of this technique. While machine learning techniques that analyze the NMR spectrum in an untargeted manner (i.e., using the full raw signal) have been shown to effectively automate this feedback mechanism, the high-dimensional and noisy structure of the NMR signal limits the attained performance.
Results: In this study, we show that identifying informative regions in the HRMAS NMR spectrum and using them for tumor margin assessment improves the prediction power. We use spectra normalized with the ERETIC (electronic reference to access in vivo concentrations) method, which uses an external reference signal to calibrate the HRMAS NMR spectrum. We train models to predict quantities of metabolites from annotated regions of this spectrum. Using these predictions for tumor margin assessment provides performance improvements of up to 4.6% in the Area Under the ROC Curve (AUC-ROC) and 2.8% in the Area Under the Precision-Recall Curve (AUC-PR). We validate the importance of various tumor biomarkers and identify a novel region between 7.97 ppm and 8.09 ppm as a new candidate for a glioma biomarker.
Availability and implementation: The code is released at https://github.com/ciceklab/targeted_brain_tumor_margin_assessment. The data underlying this article are available in Zenodo, at https://doi.org/10.5281/zenodo.5781769.

Item (Open Access): Uncovering complementary sets of variants for predicting quantitative phenotypes (Oxford University Press, 2021-12-02)
Yılmaz, S.; Fakhouri, Mohamad; Koyutürk, M.; Çiçek, A. E.; Taştan, Ö.

Motivation: Genome-wide association studies show that variants in individual genomic loci alone are not sufficient to explain the heritability of complex, quantitative phenotypes. Many computational methods have been developed to address this issue by considering subsets of loci that can collectively predict the phenotype. This problem can be considered a challenging instance of feature selection in which the number of dimensions (loci that are screened) is much larger than the number of samples. While currently available methods can achieve decent phenotype prediction performance, they either do not scale to large datasets or have parameters that require extensive tuning.
Results: We propose a fast and simple algorithm, Macarons, to select a small, complementary subset of variants by avoiding redundant pairs that are likely to be in linkage disequilibrium. Our method features two interpretable parameters that control the time/performance trade-off without requiring parameter tuning. In our computational experiments, we show that Macarons consistently achieves similar or better prediction performance than state-of-the-art selection methods while having a simpler premise and being at least two orders of magnitude faster. Overall, Macarons can seamlessly scale to the human genome with 10^7 variants in a matter of minutes while taking the dependencies between the variants into account.
Availability and implementation: Macarons is available in Matlab and Python at https://github.com/serhan-yilmaz/macarons.
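Macarons selects variants that are individually predictive of the phenotype while skipping candidates that are redundant with already-selected variants due to linkage disequilibrium. The greedy sketch below illustrates that general idea only; the correlation-based redundancy test and cutoff are stand-ins, not the authors' exact scoring or their genomic-distance shortcut.

```python
# Greedy sketch of selecting a complementary variant subset: rank variants by
# correlation with the phenotype and skip candidates strongly correlated (a
# proxy for linkage disequilibrium) with an already-selected variant.
# Macarons' actual scoring and distance-based shortcuts differ.
import numpy as np

def select_complementary(genotypes: np.ndarray, phenotype: np.ndarray,
                         k: int = 10, redundancy_cutoff: float = 0.5) -> list[int]:
    """genotypes: (samples x variants) matrix; phenotype: (samples,) vector."""
    n_variants = genotypes.shape[1]
    pheno_corr = np.array([abs(np.corrcoef(genotypes[:, j], phenotype)[0, 1])
                           for j in range(n_variants)])
    order = np.argsort(-pheno_corr)          # most phenotype-correlated first
    selected: list[int] = []
    for j in order:
        if len(selected) == k:
            break
        redundant = any(
            abs(np.corrcoef(genotypes[:, j], genotypes[:, s])[0, 1]) > redundancy_cutoff
            for s in selected
        )
        if not redundant:
            selected.append(int(j))
    return selected

# Tiny synthetic example: 100 samples, 50 variants, phenotype driven by two loci.
rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(100, 50)).astype(float)
y = G[:, 3] + 0.5 * G[:, 17] + rng.normal(0, 1, 100)
print(select_complementary(G, y, k=5))
```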
Item (Open Access): SeGraM: A universal hardware accelerator for genomic sequence-to-graph and sequence-to-sequence mapping (Association for Computing Machinery, 2020-06-11)
Cali, D.Ş.; Kanellopoulos, K.; Lindegger, J.; Bingöl, Zülal; Kalsi, G.S.; Zuo, Z.; Fırtına, Can; Cavlak, M.B.; Kim, J.; Ghiasi, N.M.; Singh, G.; Gómez-Luna, J.; Almadhoun Alserr, N.; Alser, M.; Subramoney, S.; Alkan, Can; Ghose, S.; Mutlu, O.

A critical step of genome sequence analysis is the mapping of sequenced DNA fragments (i.e., reads) collected from an individual to a known linear reference genome sequence (i.e., sequence-to-sequence mapping). Recent works replace the linear reference sequence with a graph-based representation of the reference genome, which captures the genetic variations and diversity across many individuals in a population. Mapping reads to the graph-based reference genome (i.e., sequence-to-graph mapping) results in notable quality improvements in genome analysis. Unfortunately, while sequence-to-sequence mapping is well studied with many available tools and accelerators, sequence-to-graph mapping is a more difficult computational problem, with a much smaller number of practical software tools currently available. We analyze two state-of-the-art sequence-to-graph mapping tools and reveal four key issues. We find that there is a pressing need for a specialized, high-performance, scalable, and low-cost algorithm/hardware co-design that alleviates bottlenecks in both the seeding and alignment steps of sequence-to-graph mapping. Since sequence-to-sequence mapping can be treated as a special case of sequence-to-graph mapping, we aim to design an accelerator that is efficient for both linear and graph-based read mapping. To this end, we propose SeGraM, a universal algorithm/hardware co-designed genomic mapping accelerator that can effectively and efficiently support both sequence-to-graph mapping and sequence-to-sequence mapping, for both short and long reads. To our knowledge, SeGraM is the first algorithm/hardware co-design for accelerating sequence-to-graph mapping. SeGraM consists of two main components: (1) MinSeed, the first minimizer-based seeding accelerator, which finds the candidate locations in a given genome graph; and (2) BitAlign, the first bitvector-based sequence-to-graph alignment accelerator, which performs alignment between a given read and the subgraph identified by MinSeed. We couple SeGraM with high-bandwidth memory to exploit low-latency and highly parallel memory access, which alleviates the memory bottleneck. We demonstrate that SeGraM provides significant improvements for multiple steps of the sequence-to-graph (S2G) and sequence-to-sequence (S2S) mapping pipelines. First, SeGraM outperforms state-of-the-art S2G mapping tools by 5.9×/3.9× and 106×/742× for long and short reads, respectively, while reducing power consumption by 4.1×/4.4× and 3.0×/3.2×. Second, BitAlign outperforms a state-of-the-art S2G alignment tool by 41×-539× and three S2S alignment accelerators by 1.2×-4.8×. We conclude that SeGraM is a high-performance and low-cost universal genomics mapping accelerator that efficiently supports both sequence-to-graph and sequence-to-sequence mapping pipelines.
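MinSeed, SeGraM's seeding component, is built around minimizer seeds. As background, a (w, k)-minimizer scheme keeps the smallest k-mer in every window of w consecutive k-mers. The software sketch below computes minimizers of a read to illustrate the general technique; it does not reflect the accelerator's hash function or hardware design.

```python
# Software illustration of (w, k)-minimizer seeding, the general technique that
# MinSeed accelerates in hardware. The ordering, window size, and k used here
# are arbitrary choices.

def minimizers(sequence: str, k: int = 5, w: int = 4) -> set[tuple[int, str]]:
    """Return (position, k-mer) pairs that are minimizers of the sequence."""
    kmers = [(i, sequence[i:i + k]) for i in range(len(sequence) - k + 1)]
    selected = set()
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        # Pick the smallest k-mer in the window (plain string order here;
        # real tools usually apply a hash to avoid poly-A bias).
        selected.add(min(window, key=lambda kv: kv[1]))
    return selected

read = "ACGTACGTTTGACCAGT"
for pos, kmer in sorted(minimizers(read)):
    print(pos, kmer)
```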
Item (Unknown): UnSplit: Data-Oblivious model inversion, model stealing, and label inference attacks against split learning (Association for Computing Machinery, 2022-11-07)
Erdoğan, Ege; Küpçü, Alptekin; Çiçek, A. Ercüment

Training deep neural networks often forces users to work in a distributed or outsourced setting, accompanied by privacy concerns. Split learning aims to address this concern by distributing the model between a client and a server. The scheme supposedly provides privacy, since the server cannot see the clients' models and inputs. We show that this is not true via two novel attacks. (1) We show that an honest-but-curious split learning server, equipped only with the knowledge of the client neural network architecture, can recover the input samples and obtain a functionally similar model to the client model, without being detected. (2) We show that if the client keeps hidden only the output layer of the model to "protect" the private labels, the honest-but-curious server can infer the labels with perfect accuracy. We test our attacks using various benchmark datasets and against proposed privacy-enhancing extensions to split learning. Our results show that plaintext split learning can pose serious risks, ranging from data (input) privacy to intellectual property (model parameters), and provides no more than a false sense of security. © 2022 Owner/Author.

Item (Unknown): SplitGuard: Detecting and mitigating training-hijacking attacks in split learning (Association for Computing Machinery, New York, NY, United States, 2022-11-07)
Erdoğan, Ege; Küpçü, Alptekin; Çiçek, A. Ercüment

Distributed deep learning frameworks such as split learning provide great benefits with regard to the computational cost of training deep neural networks and the privacy-aware utilization of the collective data of a group of data holders. Split learning, in particular, achieves this goal by dividing a neural network between a client and a server so that the client computes the initial set of layers and the server computes the rest. However, this method introduces a unique attack vector for a malicious server attempting to steal the client's private data: the server can direct the client model towards learning any task of its choice, e.g., towards outputting easily invertible values. With a concrete example already proposed (Pasquini et al., CCS '21), such training-hijacking attacks present a significant risk for the data privacy of split learning clients. In this paper, we propose SplitGuard, a method by which a split learning client can detect whether it is being targeted by a training-hijacking attack or not. We experimentally evaluate our method's effectiveness, compare it with potential alternatives, and discuss in detail various points related to its use. We conclude that SplitGuard can effectively detect training-hijacking attacks while minimizing the amount of information recovered by the adversaries. © 2022 Owner/Author.
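Both papers above target split learning, in which a neural network is divided so that the client computes the first layers and the server computes the rest. A minimal PyTorch sketch of that plain protocol (the setting being attacked and defended, with no network transport, attack, or SplitGuard logic, and with arbitrary layer sizes) is:

```python
# Minimal sketch of the split-learning setup studied by UnSplit and SplitGuard:
# the client runs the first layers, the server runs the remaining layers, and
# gradients flow back across the split. For brevity, a single joint optimizer
# is used and no client-server communication channel is modeled.
import torch
import torch.nn as nn

client_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
server_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(
    list(client_model.parameters()) + list(server_model.parameters()), lr=0.1
)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 28, 28)      # a client-side batch
y = torch.randint(0, 10, (32,))     # labels (kept by the client in label-private variants)

smashed = client_model(x)           # intermediate activations sent to the server
logits = server_model(smashed)      # server-side forward pass
loss = loss_fn(logits, y)
opt.zero_grad()
loss.backward()                     # gradients flow back across the split
opt.step()
print(float(loss))
```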
Item (Unknown): Proposing new routing protocol based on chaos algorithm (Conscientia Beam, 2022-05-09)
Majdi, Ali

In mobile ad hoc networks (MANETs), multicast routing is considered a problem of non-deterministic polynomial (NP) complexity involving assorted objectives and restrictions. In the MANET multicast problem, quality-of-service (QoS) measures such as cost, delay, jitter, and bandwidth are commonly treated as multiple objectives for multicast routing (MR) protocols. Moreover, a mobile node has finite battery energy, and the lifetime of the network depends on the battery energy of its mobile nodes. Here, the MANET MR problem is formulated with five objectives, namely the optimization of cost, delay, jitter, bandwidth, and network lifetime, and solved with the help of Chaotic-CSA-ROA. Evaluation metrics, namely delay, delivery ratio, packet drop, network lifespan, overhead, and throughput, are analyzed with respect to node count, rate, and speed. The proposed QOS-MRP-CSROA-MANET provides 32.9496% and 65.5839% higher throughput with respect to node count, 16.6049% and 30.4654% higher throughput with respect to rate, and 10.1298% and 7.0825% higher throughput with respect to speed, as well as 63.7313% and 52.2255% lower packet drop with respect to node count, 51.5528% and 25.6220% lower packet drop with respect to rate, and 18.0857% and 24.5953% lower packet drop with respect to speed, compared with existing methods, namely the QoS-aware multicast routing protocol using particle swarm optimization in MANET (QOS-MRP-PSOA-MANET) and the QoS-aware multicast routing protocol using a genetic algorithm in MANET (QOS-MRP-GA-MANET), respectively.

Item (Open Access): Binary transformation method for multi-label stream classification (Association for Computing Machinery, 2022-10-17)
Gülcan, Ege Berkay; Ecevit, Işın Su; Can, Fazlı

Data streams produce extensive data with high throughput from various domains and require copious amounts of computational resources and energy. Many data streams are generated as multi-labeled, and classifying this data is computationally demanding. Some of the most well-known methods for Multi-Label Stream Classification are Problem Transformation schemes; however, previous work in this area does not satisfy the efficiency demands of multi-label data streams. In this study, we propose a novel Problem Transformation method for Multi-Label Stream Classification called Binary Transformation, which utilizes regression algorithms by transforming the labels into a continuous value. We compare our method against three of the leading problem transformation methods using eight datasets. Our results show that Binary Transformation achieves statistically similar effectiveness and provides a much higher level of efficiency.
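The abstract states that Binary Transformation maps the label set to a single continuous value so that regression algorithms can be used. One plausible reading of that idea, shown here with batch scikit-learn tools purely for illustration (the published method works incrementally on streams and its exact encoding may differ), is to treat each binary label vector as one number:

```python
# Hedged sketch of the "labels as one continuous value" idea behind Binary
# Transformation: encode each binary label vector as an integer, fit a
# regressor, then round and decode predictions back into label vectors.
# The published method operates on streams with incremental regressors and
# may encode labels differently; this batch example only conveys the concept.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def encode(labels: np.ndarray) -> np.ndarray:
    """Interpret each row of 0/1 labels as a binary number."""
    weights = 2 ** np.arange(labels.shape[1])[::-1]
    return labels @ weights

def decode(values: np.ndarray, n_labels: int) -> np.ndarray:
    """Round regression outputs and unpack them back into 0/1 label rows."""
    ints = np.clip(np.rint(values), 0, 2 ** n_labels - 1).astype(int)
    return np.array([[(v >> (n_labels - 1 - i)) & 1 for i in range(n_labels)]
                     for v in ints])

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 5))
Y = (X[:, :3] > 0).astype(int)                 # 3 synthetic, feature-driven labels
reg = DecisionTreeRegressor(max_depth=6).fit(X[:200], encode(Y[:200]))
Y_pred = decode(reg.predict(X[200:]), n_labels=3)
print("exact-match ratio:", (Y_pred == Y[200:]).all(axis=1).mean())
```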