Browsing by Author "Nouri, Azita"
Now showing 1 - 2 of 2
Item Open Access: Can you really anonymize the donors of genomic data in today's digital world? (Springer, 2016-09)
Alser, Mohammed; Almadhoun, Nour; Nouri, Azita; Alkan, Can; Ayday, Erman

The rapid progress in genome sequencing technologies leads to the availability of vast amounts of genomic data. Accelerating the pace of biomedical breakthroughs and discoveries necessitates not only collecting millions of genetic samples but also granting open access to genetic databases. However, one growing concern is the ability to protect the privacy of sensitive information and its owner. In this work, we survey a wide spectrum of cross-layer privacy-breaching strategies against human genomic data (using both public genomic databases and other public non-genomic data). We outline the principles and outcomes of each technique, and assess its technological complexity and maturity. We then review potential privacy-preserving countermeasure mechanisms for each threat. © Springer International Publishing Switzerland 2016.

Item Open Access: Read mapping methods optimized for multiple GPGPU (2016-07)
Nouri, Azita

The DNA sequence alignment problem can be broadly defined as the character-level comparison of DNA sequences obtained from one or more samples against a reference (i.e., consensus) genome sequence of the same or a similar species. High-throughput sequencing (HTS) technologies were introduced in 2006, and the latest iterations can read the genome of a human individual in just three days for a cost of ~$1,000. HTS technologies produce massive amounts of reads of varying sizes, and they also present a computational problem: analyzing HTS data requires comparing more than one billion short (100 characters, or base pairs) "reads" against a very long (3 billion base pairs) reference genome. Since DNA molecules are composed of two opposing strands (i.e., two complementary strings), the number of required comparisons is doubled. Mapping this volume of different-sized short reads therefore poses a difficult and important challenge in terms of execution time and scalability.

Instead of calculating billions of short-versus-long local alignments with a quadratic-time algorithm, heuristics are applied to speed up the process. First, partial sequence matches, called "seeds", are quickly found using either the Burrows-Wheeler Transform (BWT) together with the Ferragina-Manzini index (FM-index), or a simple hash table. Next, the candidate locations are verified using a dynamic programming alignment algorithm that calculates the Levenshtein edit distance (mismatches, insertions, and deletions with respect to the reference) and runs in quadratic time. Although these heuristics are substantially faster than local alignment, because of the repetitive nature of the human genome they often require hundreds of verification runs per read, imposing a heavy computational burden. However, all of these billions of alignments are independent of each other, so the read mapping problem is embarrassingly parallel.

In this thesis we propose novel algorithms optimized for multiple general-purpose graphics processing units (GPGPUs) to accelerate the read mapping procedure beyond what algorithmic improvements on CPUs alone can achieve. We distribute the read mapping workload onto the massively parallel architecture of GPGPUs to perform millions of alignments simultaneously, using one or many GPGPUs together with multi-core CPUs. Our aim is to reduce the hardware requirement from large-scale clusters or cloud platforms to a single server with advanced parallel processing units.
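To make the verification step and its parallel structure concrete, the following is a minimal, illustrative CUDA sketch (not the thesis implementation): each GPU thread verifies one (read, candidate-location) pair by computing a semi-global Levenshtein edit distance with a two-row dynamic-programming table. The names READ_LEN, WINDOW_LEN and verify_candidates are hypothetical placeholders chosen for this example.

// Hedged sketch: one thread per candidate; banded/bit-parallel methods would be faster in practice.
#include <cstdio>
#include <cuda_runtime.h>

#define READ_LEN   100   // short-read length in base pairs
#define WINDOW_LEN 110   // candidate reference window: read length plus edit-distance slack

__global__ void verify_candidates(const char *reads,      // n_candidates * READ_LEN characters
                                   const char *windows,    // n_candidates * WINDOW_LEN characters
                                   int        *edit_dist,  // output: one distance per candidate
                                   int         n_candidates)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n_candidates) return;

    const char *read   = reads   + (size_t)tid * READ_LEN;
    const char *window = windows + (size_t)tid * WINDOW_LEN;

    // Two rolling DP rows: prev[j] / curr[j] hold the cost of aligning a read prefix
    // against window[0..j). Row 0 is all zeros so the read may start anywhere in the window.
    int prev[WINDOW_LEN + 1], curr[WINDOW_LEN + 1];
    for (int j = 0; j <= WINDOW_LEN; ++j) prev[j] = 0;

    for (int i = 1; i <= READ_LEN; ++i) {
        curr[0] = i;  // aligning read[0..i) against an empty window prefix
        for (int j = 1; j <= WINDOW_LEN; ++j) {
            int sub = prev[j - 1] + (read[i - 1] != window[j - 1]);  // match / mismatch
            int del = prev[j] + 1;                                   // base missing from the read
            int ins = curr[j - 1] + 1;                               // extra base in the read
            int best = sub < del ? sub : del;
            curr[j] = best < ins ? best : ins;
        }
        for (int j = 0; j <= WINDOW_LEN; ++j) prev[j] = curr[j];
    }

    // The read may also end anywhere in the window, so take the minimum of the last row.
    int best = prev[0];
    for (int j = 1; j <= WINDOW_LEN; ++j) best = prev[j] < best ? prev[j] : best;
    edit_dist[tid] = best;
}

int main()
{
    // Toy run with a single candidate; a real mapper would batch millions of pairs per launch.
    const int n = 1;
    char h_read[READ_LEN], h_window[WINDOW_LEN];
    for (int j = 0; j < WINDOW_LEN; ++j) h_window[j] = "ACGT"[j % 4];
    for (int i = 0; i < READ_LEN; ++i)   h_read[i]   = h_window[i + 5];  // read copied from offset 5
    h_read[50] = (h_read[50] == 'A') ? 'C' : 'A';                        // plant one mismatch

    char *d_read, *d_window; int *d_dist, h_dist;
    cudaMalloc(&d_read, READ_LEN);
    cudaMalloc(&d_window, WINDOW_LEN);
    cudaMalloc(&d_dist, sizeof(int));
    cudaMemcpy(d_read, h_read, READ_LEN, cudaMemcpyHostToDevice);
    cudaMemcpy(d_window, h_window, WINDOW_LEN, cudaMemcpyHostToDevice);

    verify_candidates<<<(n + 255) / 256, 256>>>(d_read, d_window, d_dist, n);
    cudaMemcpy(&h_dist, d_dist, sizeof(int), cudaMemcpyDeviceToHost);
    printf("edit distance = %d\n", h_dist);  // expected: 1

    cudaFree(d_read); cudaFree(d_window); cudaFree(d_dist);
    return 0;
}

Because each candidate is verified independently, batches of (read, window) pairs can simply be partitioned across several GPUs, with each device launching the same kernel over its own slice of the candidate list; this is one plausible way to realize the multi-GPU distribution described in the abstract, not a description of the thesis's actual scheme.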