Graduate School of Engineering and Science
Permanent URI for this collection: https://hdl.handle.net/11693/115678
Recent Submissions
Item (Embargo): Pairwise whole genome alignment using locally consistent parsing (2026-01), İlgün, Ecem

Pairwise whole-genome alignment is a fundamental problem in computational biology, with applications in evolutionary analysis, variant discovery, and comparative genomics. This work addresses the massive scaling challenges of pangenome analysis using a hierarchical sketching method based on Locally Consistent Parsing (LCP). At the scale of billions of base pairs, efficient alignment typically relies on the seed-chain-extend heuristic: find exact-matching sketches (seeds), chain them co-linearly, and extend into the gaps. Established tools use minimizers or maximal unique matches (MUMs); we instead use LCP cores, which offer complete coverage, consistent spacing, and fewer seeds at higher levels. Distributed and parallelized multiple genome alignment relies on efficiently partitioning the input genomes into smaller segments that can be processed independently. Existing partitioning methods often rely on maximal exact matches (MEMs), maximal unique matches (MUMs), or minimizers for sketching. For MEMs and MUMs, however, the alignment process is complicated by the O(m log n) time required to find MEMs of size m in a string of size n. Minimizers, in turn, exhibit drawbacks in their distribution patterns and frequencies due to their short length, leading to suboptimal partitioning in terms of computational and communication overhead. Compared to minimizers, LCP offers a more thorough and condensed representation of the input data by identifying “cores”: short genomic sequences that are consistently present across genomes. We develop a fast, parallelizable pairwise genome alignment framework that uses a hierarchical seed-chain-extend strategy: seed at one LCP level, chain and merge matches, find unaligned regions, and, for each region, recurse by seeding only that region at the next lower level until a minimum level is reached.
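The hierarchical recursion just described can be sketched as follows. This is an illustrative toy, not the thesis implementation: exact k-mer matches stand in for LCP cores, with the k-mer length shrinking as the level decreases, and all names (`seeds`, `chain`, `align_recursive`) are hypothetical.

```python
def seeds(ref, qry, k):
    """Exact k-mer matches (ref_pos, qry_pos) -- a stand-in for LCP cores."""
    index = {}
    for i in range(len(ref) - k + 1):
        index.setdefault(ref[i:i + k], []).append(i)
    out = []
    for j in range(len(qry) - k + 1):
        for i in index.get(qry[j:j + k], []):
            out.append((i, j))
    return out

def chain(matches):
    """Greedy co-linear chaining: keep matches increasing in both genomes."""
    best, last_i, last_j = [], -1, -1
    for i, j in sorted(matches):
        if i > last_i and j > last_j:
            best.append((i, j))
            last_i, last_j = i, j
    return best

def align_recursive(ref, qry, k, min_k=3):
    """Seed at level k, chain, then recurse into unaligned gaps at k - 1."""
    if k < min_k or not ref or not qry:
        return []
    anchors = chain(seeds(ref, qry, k))
    if not anchors:
        return align_recursive(ref, qry, k - 1, min_k)
    result, pi, pj = [], 0, 0
    for i, j in anchors:
        # recurse into the gap before this anchor at the next lower level
        result += [(pi + a, pj + b, kk) for a, b, kk in
                   align_recursive(ref[pi:i], qry[pj:j], k - 1, min_k)]
        result.append((i, j, k))
        pi, pj = i + k, j + k
    result += [(pi + a, pj + b, kk) for a, b, kk in
               align_recursive(ref[pi:], qry[pj:], k - 1, min_k)]
    return result
```

A real implementation would seed with LCP cores rather than k-mers and merge overlapping anchors before recursing; the control flow (seed, chain, recurse into gaps) is the part being illustrated.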
LCP cores can be computed hierarchically in linear time, leading to more balanced computational loads. We integrated LCPtools with the ChainX-LCP chaining algorithm and evaluated it on E. coli (K-12 vs. Sakai) and human (GRCh38 vs. CHM13) assemblies; on the human genome, our seeding completed in 68 h while Mumemto was still running after 540 h, demonstrating scalability for reference-grade assemblies.

Item (Open Access): Controllable diffusion-based visual editing (2026-02), Ekin, Yiğit

Advancements in generative networks have significantly improved visual generation, particularly for image and video editing applications. However, key challenges remain in achieving controllable editing. Diffusion inpainting models often hallucinate or re-insert the intended object during object removal, and text-to-video diffusion models struggle to follow a desired motion pattern without sacrificing prompt alignment in motion-conditioned generation. This thesis addresses these gaps through two interconnected studies. First, we introduce a background-focused image conditioning framework for object removal that utilizes focused embeddings and proposes a suppression method for removing the foreground concept from the conditioning signal. By explicitly using such conditioning, it prevents common failure modes such as foreground leakage and mask-shape-driven hallucinations. Second, we develop a motion-conditioned video generation and editing method that achieves successful motion transfer from a reference to the generated video. By directly updating the positional embeddings, it achieves high-fidelity, motion-aligned generation without sacrificing textual condition alignment.
Together, these contributions advance controllable visual editing by demonstrating that pretrained generative models contain useful behaviors beyond their explicit training objectives, and that providing the right guidance can unlock robust control with improved fidelity, consistency, and user-directed precision.

Item (Open Access): Privacy preserving split learning (2026-01), Shabbir, Aqsa

Split learning enables collaborative model training without sharing raw data; however, its traditional form remains vulnerable because plaintext intermediate activations and gradients can leak sensitive information. These leakages enable attacks such as input reconstruction, label and property inference, and model manipulation, undermining the privacy guarantees that split learning aims to provide. This thesis addresses these limitations by designing a privacy-preserving split learning system. The proposed design inverts the conventional workflow so that labels, loss computation, and backpropagation remain entirely on the client, while all server-side computation is performed in the encrypted domain using homomorphic encryption. As a result, the server never observes plaintext activations, labels, or gradients during training, eliminating known attack surfaces. To make encrypted split learning practical, the thesis introduces an estimator that models ciphertext noise growth, bootstrapping requirements, and end-to-end runtime as functions of network architecture and split placement. The estimator jointly captures encrypted server-side computation and plaintext client-side computation, enabling noise- and budget-aware split selection without exhaustive empirical profiling.
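A split-placement estimator of the kind described above can be illustrated with a toy cost model. Every constant here (per-operation costs, noise budget) is a made-up placeholder, and the functions `estimate` and `best_split` are hypothetical names, not the thesis estimator.

```python
def estimate(layers, split, depth_budget=8,
             he_ms=50.0, plain_ms=1.0, bootstrap_ms=400.0):
    """Estimated runtime (ms) when layers[:split] run on the plaintext client
    and layers[split:] run under homomorphic encryption on the server.
    Each layer is a pair (ops, multiplicative_depth)."""
    client = sum(ops * plain_ms for ops, _ in layers[:split])
    server, noise, bootstraps = 0.0, 0, 0
    for ops, depth in layers[split:]:
        noise += depth
        if noise > depth_budget:      # ciphertext noise budget exhausted:
            bootstraps += 1           # refresh the ciphertext
            noise = depth
        server += ops * he_ms
    return client + server + bootstraps * bootstrap_ms

def best_split(layers, **kw):
    """Pick the split index with the lowest estimated end-to-end runtime."""
    return min(range(len(layers) + 1), key=lambda s: estimate(layers, s, **kw))
```

The design point being illustrated: because homomorphic operations and bootstraps dominate, the split that minimizes the estimate can be found by sweeping placements against the model instead of profiling each one empirically.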
Our contributions include: (i) identifying and analyzing the components of traditional split learning that lead to privacy leakage, (ii) designing an inverted split learning system that eliminates information leakage by executing all server-side computation over encrypted data, and (iii) developing an estimator that enables the efficient use of homomorphic encryption in split learning under cryptographic and computational constraints.

Item (Open Access): Genome reconstruction in beacons using summary statistics (2026-01), Saleem, Kousar

Genomic data-sharing beacons, designed to safeguard individual privacy while promoting scientific discovery, remain critically vulnerable to sophisticated genome reconstruction attacks that leverage publicly released summary statistics. This thesis systematically advances the understanding and effectiveness of these attacks, challenging the assumption that releasing simple allele frequencies (AFs) is a secure protocol. The fundamental flaw lies in the beacon protocol's failure to account for linkage disequilibrium (LD), which allows a malicious party to infer individual data from combined summary statistics. Our foundational contribution established the feasibility of this threat with a two-stage optimization-based algorithm that utilized public LD and AFs, achieving an F1-score of 70% and confirming the inherent privacy risk. Building upon this, the research introduces a more powerful methodology: a single-stage joint optimization framework that unifies the objectives of SNP correlation and allele frequency alignment. This formulation not only increases reconstruction performance to an average F1-score of 71.4% but also yields substantial computational savings: reconstructing 2,000 SNPs across 100 individuals now requires 7.4 hours instead of 10 hours, a 26% reduction in runtime.
Collectively, these results provide compelling evidence of the increasing practicality and sophistication of genome reconstruction attacks against beacon protocols, underscoring the urgent need for robust, adaptive, and correlation-aware defense mechanisms to protect the integrity and privacy of genomic data infrastructure.

Item (Open Access): Design of non-monetary incentives for efficiency in selfish routing via strategic intersection control (2025-12), Saltan, Yusuf

Urban transportation networks routinely suffer from inefficiencies caused by selfish routing, whereby individual drivers select routes that minimize their own travel time rather than overall system delay. This decentralized behavior leads to user equilibria that can deviate significantly from system-optimal flows. Although monetary tolls can theoretically eliminate such inefficiencies, their practical, political, and equity-related limitations motivate the development of alternative, non-monetary control mechanisms. This thesis develops and analyzes two intersection-based incentive mechanisms that leverage modern Autonomous Intersection Management (AIM) to influence route choices and steer selfish routing toward socially efficient outcomes without monetary transfers. The first mechanism, termed Strategic Priority-Based Scheduling (SPBS), introduces small, route-dependent priority adjustments at intersections, thereby inducing controlled, path-dependent waiting times. Analytical examples, including Pigou’s network, show that even minimal priority asymmetries can substantially reduce inefficiency. These insights are further validated through high-fidelity microscopic simulations, demonstrating the mechanism’s feasibility under realistic driving and queueing dynamics. The second mechanism generalizes this approach through an analytical framework based on timestamp offsets.
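The Pigou example mentioned above quantifies the inefficiency these mechanisms target. The textbook instance has unit demand over two parallel links, one with flow-dependent cost c1(x) = x and one with constant cost c2 = 1; the sketch below only reproduces this standard user-equilibrium vs. system-optimum gap, not the thesis mechanisms themselves.

```python
def total_cost(x):
    """System travel time with flow x on the congestible link (demand = 1):
    x * c1(x) + (1 - x) * c2 = x^2 + (1 - x)."""
    return x * x + (1 - x) * 1.0

def user_equilibrium():
    """Everyone takes the congestible link, since c1(x) <= 1 for all x <= 1;
    at x = 1 both routes cost 1 and no driver can improve by switching."""
    return 1.0

def system_optimum():
    """Minimize x^2 + (1 - x): setting the derivative 2x - 1 = 0 gives x = 1/2."""
    return 0.5

x_ue, x_so = user_equilibrium(), system_optimum()
poa = total_cost(x_ue) / total_cost(x_so)   # price of anarchy: 1 / 0.75 = 4/3
```

The gap between total_cost(1) = 1 and total_cost(0.5) = 0.75 is the inefficiency that priority adjustments and timestamp offsets aim to recover without charging tolls.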
Intersections apply small additive adjustments to vehicles’ effective arrival times, inducing path-dependent node delays while preserving uniqueness of equilibrium travel costs, even when multiple equilibrium flows exist. This structure enables a bilevel optimization formulation in which a system planner designs timestamp offsets while anticipating user-equilibrium responses. Calibration using simulation-generated intersection delay data for the Sioux Falls network yields realistic quartic node cost models, and large-scale numerical experiments show that timestamp-based incentives can eliminate up to 68% of the inefficiency at user equilibrium, even under tight operational constraints. Taken together, these results demonstrate that intersections, traditionally viewed as network bottlenecks, can be transformed into powerful non-monetary control instruments. By exploiting the capabilities of modern AIM, the proposed mechanisms provide practical, scalable, and analytically grounded tools for improving network-wide efficiency without relying on tolls or major infrastructure modifications.

Item (Open Access): Single-entry raffles with cryptographic verifiability and privacy (2026-01), Bayramoğlu, Kerem

Lotteries are an integral part of generating revenue for public initiatives through regulated selection mechanisms. However, traditional raffle systems often face challenges related to privacy, fairness, and verifiability. To address these challenges, this thesis presents a novel system architecture for conducting single-entry raffles that ensures privacy and fairness through verifiability. In this work, two distinct architectures are proposed. (i) Centralized architecture: participants receive random numbers and unique IDs, which are added to an RSA accumulator along with their rank. The final accumulator is published in a trusted space, allowing participants to verify their entries.
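The RSA-accumulator membership check behind this verification step can be sketched minimally. This is a deliberately insecure toy: the tiny modulus, the generator, and the assumption that entries are already mapped to distinct primes are all illustrative choices, not the thesis construction.

```python
N = 3233          # toy RSA modulus (61 * 53); real systems use >= 2048 bits
G = 2             # public base

def accumulate(primes):
    """Fold every entry's prime into the accumulator: G^(product) mod N."""
    acc = G
    for p in primes:
        acc = pow(acc, p, N)
    return acc

def witness(primes, member):
    """Inclusion proof for `member`: accumulate every entry except it."""
    return accumulate([p for p in primes if p != member])

def verify(acc, member, proof):
    """A participant checks proof^member == accumulator (mod N)."""
    return pow(proof, member, N) == acc
```

Because the witness omits exactly one prime, raising it to that prime recovers the full accumulator; a forged entry whose prime was never folded in fails the check.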
A verifiable random number generator selects the winner, with inclusion proofs available via queries. (ii) Blockchain-based architecture: certificate authority hashes are mapped to smart contract numbers, enabling verifiable winner selection and participant inclusion checks for a private, auditable raffle. The proposed system leverages RSA accumulators and verifiable random functions to maintain both transparency and confidentiality; the winner selection process is transparent and auditable, maintaining the integrity of the raffle. By offering both centralized and blockchain-based solutions, this approach provides flexibility while maintaining the core principles of fairness and privacy. The proposed raffle system guarantees fairness, verifiability, and privacy in a single-entry setup without requiring a trusted third party, thereby establishing a secure and transparent approach to online raffle management. The implementation is available at https://github.com/ASAP-Bilkent/private-decentralized-lottery.

Item (Open Access): NeXtStereo: directionally driven channel expansion gives adaptive real-time stereo (2026-01), Ekinci, Ekin Berk

We present NeXtStereo, a lightweight stereo disparity estimation network designed for real-time depth perception. NeXtStereo builds on Widened ConvNeXtV2 blocks that strengthen cost aggregation while leveraging the scalability and generalization behavior of the ConvNeXt family. In addition, we introduce Directionally Modulated Attention (DMA), a novel attention mechanism that incorporates geometric priors to modulate features using directional cues. Together, these components improve structural detail recovery in challenging regions such as object boundaries, thin structures, and texture-weak areas, without relying on heavy 3D aggregation stacks.
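The cost aggregation that such stereo networks refine operates on a cost volume, which can be illustrated with a minimal toy. Absolute intensity difference stands in for learned matching costs, and none of this reproduces NeXtStereo's architecture.

```python
def cost_volume(left, right, max_disp):
    """left/right: 2-D lists (H x W). Returns vol[d][y][x], the cost of
    assigning disparity d to pixel (y, x): the difference between the left
    pixel and the right pixel shifted by d, with out-of-range shifts penalized."""
    h, w = len(left), len(left[0])
    vol = []
    for d in range(max_disp):
        plane = [[abs(left[y][x] - right[y][x - d]) if x - d >= 0
                  else float("inf")
                  for x in range(w)] for y in range(h)]
        vol.append(plane)
    return vol

def winner_take_all(vol):
    """Pick, per pixel, the disparity with the lowest matching cost."""
    h, w = len(vol[0]), len(vol[0][0])
    return [[min(range(len(vol)), key=lambda d: vol[d][y][x])
             for x in range(w)] for y in range(h)]
```

Learned networks replace the absolute difference with feature correlations and replace winner-take-all with trainable 2D/3D aggregation; the trade-off NeXtStereo targets is how much of that aggregation is needed for real-time accuracy.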
We evaluate NeXtStereo on SceneFlow, KITTI 2012/2015, and Middlebury, where it achieves a favorable accuracy/efficiency trade-off among real-time models and improves cross-domain robustness, with NeXtStereo-L achieving the lowest >2px error among the compared methods. We also study adaptation to the MS2 outdoor driving dataset and observe reliable transfer under fine-tuning. Furthermore, NeXtStereo demonstrates strong compatibility with convolutional Low-Rank Adaptation (LoRA), enabling parameter-efficient domain adaptation with improved stability compared to relevant real-time stereo matching baselines. Finally, we analyze selective 3D cost aggregation via a targeted ablation that replaces the first 1/4-scale aggregation block with a 3D ConvNeXt-style cost aggregation operator, characterizing the resulting accuracy/efficiency trade-offs.

Item (Open Access): A reinforcement learning-based approach for dynamic privacy protection in genomic data sharing beacons (2026-01), Aghdam, Masoud Poorghaffar

The rise of genomic sequencing has led to significant privacy concerns due to the sensitive and identifiable nature of genomic data. The Beacon Project, initiated by the Global Alliance for Genomics and Health (GA4GH), was designed to enable privacy-preserving sharing of genomic information via an online querying system. However, studies have revealed that the protocol is vulnerable to membership inference attacks, which can expose the presence of individuals in sensitive datasets. Existing countermeasures often degrade system utility or fail to adapt to evolving attack strategies due to their static nature. To address this, we model the interaction between the beacon and the adversary as a Stackelberg game. In this formulation, the attacker acts as the leader who selects a query strategy to maximize inference, while the defender acts as the follower who optimizes the response honesty to minimize privacy loss while maintaining utility.
However, classical game-theoretic solutions are computationally intractable due to the vast search space of genomic queries. In this study, we bridge this gap by presenting a dynamic learning-based framework to approximate these equilibrium strategies. We employ a multi-agent reinforcement learning environment to solve this continuous game, training an adaptive defense policy that regulates response honesty against a sophisticated adversary capable of strategic query ordering and behavioral mimicry. Unlike conventional static defenses, this mechanism is capable of adapting in real time, dynamically differentiating between legitimate and adversarial query patterns to apply tailored policies. Consequently, this method enhances both privacy and utility, effectively countering sophisticated and evolving threats.

Item (Embargo): Hardware acceleration for adaptive gamma correction in embedded systems (2026-01), Sarıçam, İlayda

Enhancing images in low-light conditions is a critical task in various domains, including photography, security systems, military applications, and autonomous driving models. These fields often require image processing and analysis of low-light images due to a lack of lighting sources and shadows. However, there are limitations and bottlenecks in low-light image enhancement, such as under-enhancement, over-enhancement, and high power consumption. This thesis introduces a region-wise adaptive gamma correction (AGC) method, a non-learning-based approach, to enhance the visibility of low-light images. In this study, to select the optimal gamma value adaptively, the image is partitioned into regions based on detected edges and ridges. Then, the optimal gamma value for each region is computed from average intensity, brightness, luminance, and RGB values. As a result, gamma correction is applied to each region separately. With this region-wise approach, under-enhancement and over-enhancement of the input image are prevented.
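Region-wise adaptive gamma correction can be sketched with a toy. Two simplifications to flag: fixed square blocks stand in for the edge/ridge-derived regions, and gamma = log(0.5) / log(mean) is one common mean-to-mid-gray heuristic, not the thesis formula.

```python
import math

def region_gamma(mean_intensity):
    """Gamma that maps a region's mean intensity (in (0, 1)) to 0.5,
    so dark regions (mean < 0.5) get gamma < 1 and are brightened."""
    mean_intensity = min(max(mean_intensity, 1e-6), 1 - 1e-6)
    return math.log(0.5) / math.log(mean_intensity)

def enhance(img, block=2):
    """Apply a per-block gamma to a 2-D list of intensities in [0, 1]."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            cells = [(y, x) for y in range(y0, min(y0 + block, h))
                            for x in range(x0, min(x0 + block, w))]
            mean = sum(img[y][x] for y, x in cells) / len(cells)
            g = region_gamma(mean)
            for y, x in cells:
                out[y][x] = img[y][x] ** g
    return out
```

Because each region gets its own exponent, a dark region is lifted aggressively while an already-bright region is left nearly untouched, which is the mechanism preventing simultaneous under- and over-enhancement.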
Furthermore, our approach is tailored for low-light image enhancement tasks in power-limited systems. Therefore, our implementation uses low-power devices rather than the high-performance GPUs and CPUs typically used in the literature. To evaluate our results and output images, we use Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Mean Squared Error (MSE), Natural Image Quality Evaluator (NIQE), runtime, and power consumption as evaluation metrics. Also, to observe the effectiveness of our approach and compare it with prior studies, we conducted experiments on two datasets, namely LOL and MIT-Adobe FiveK. When compared with previous non-learning-based methods, our approach achieves a twofold improvement in PSNR. Furthermore, we reduce the power consumption of low-light image enhancement by more than 250×.

Item (Embargo): Digital microfluidics for biomedical applications (2026-01), Güngen, Murat Alp

Point-of-care (PoC) diagnostic technologies aim to reduce global healthcare disparities by enabling decentralized testing without reliance on advanced laboratory infrastructure. Despite significant progress, many PoC systems remain largely confined to academic research settings, limiting their clinical and societal impact. Digital microfluidics (DMF), a programmable microfluidic approach based on electrowetting-on-dielectric (EWOD), enables the two-dimensional controlled manipulation of discrete droplets and offers substantial advantages in flexibility, reconfigurability, and functional integration over conventional continuous-flow microfluidic platforms. These characteristics make DMF a promising technological foundation for PoC diagnostics. In the first part of this thesis, the capabilities of a commercially available DMF platform, OpenDrop, are explored for biomedical applications relevant to PoC testing. The platform is employed to perform extracellular vesicle isolation and enzyme-linked immunosorbent assays.
In addition, OpenDrop is used to rapidly generate image-based datasets to evaluate the feasibility of applying an in-house, U-Net-based computer vision framework for droplet detection and classification on DMF devices. Building upon these demonstrations, the second part of this thesis focuses on the design, characterization, and fabrication of a custom DMF platform, designated “Markut.” This development includes computational analysis of the Young–Lippmann equation to guide EWOD optimization, systematic electrowetting experiments conducted in both air and oil to assess dielectric material performance, and the realization of a functional device architecture informed by these results. To support molecular diagnostic applications, a temperature control module is integrated to enable loop-mediated isothermal amplification (LAMP) assays. Furthermore, computer vision–based colorimetric analysis and electrical impedance measurements are incorporated to reliably distinguish between positive and negative LAMP outcomes. Overall, this thesis demonstrates the feasibility and versatility of both commercially available and custom-built DMF platforms for PoC-relevant biomedical applications. The presented results highlight DMF as a robust and scalable technology with strong potential to facilitate the translation of microfluidic diagnostics from laboratory research toward practical, real-world deployment.

Item (Open Access): Robust deep learning under distribution shift: invariant feature learning and reliable test-time adaptation (2026-01), Karimi, Saeid

Deep learning models often suffer significant performance degradation when deployed in environments whose data distributions differ from those encountered during training. This distribution shift remains a central challenge for robust visual recognition.
Although Domain Generalization (DG) strives to learn models that generalize to unseen domains without accessing target data, recent studies show that many DG techniques yield limited improvements over empirical risk minimization due to reliance on spurious, domain-specific features. To address this issue, the first part of this thesis introduces Specific Domain Training (SDT), a method that disentangles spurious and invariant features via specific-domain sampling, masking, and variance-aware weight averaging. SDT improves both theoretical robustness and practical performance on DG benchmarks. The second part of the thesis focuses on Test-Time Adaptation (TTA), which adapts a pretrained model to incoming test samples without labels. Existing TTA methods often rely on noisy pseudo-labels and fail to leverage informative structure from source domains. To mitigate these limitations, we develop three complementary approaches. SATA uses source-domain style statistics to identify style-invariant test samples, ensuring stable entropy minimization while regularizing unreliable samples through consistency constraints. AdaPAC leverages subclass prototypes extracted using class-specific clustering to capture intra-class structure, selecting test samples that align well with source clusters and adapting the model with prototype-guided contrastive objectives. Shift-ACT introduces shift-aware, class-specific dynamic thresholding based on confidence discrepancies between source and target distributions, enabling reliable sample selection under class-wise distribution shifts.
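The shared ingredient of these three approaches, selecting reliable test samples via class-wise confidence thresholds, can be sketched generically. The softmax/entropy formulas are standard; nothing below reproduces SATA, AdaPAC, or Shift-ACT specifically.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy of a probability vector (nats); the quantity
    entropy-minimization TTA methods reduce on selected samples."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_reliable(batch_logits, class_thresholds):
    """Keep the indices of samples whose predicted-class confidence clears
    that class's threshold; adaptation then uses only the survivors."""
    keep = []
    for i, logits in enumerate(batch_logits):
        p = softmax(logits)
        c = p.index(max(p))
        if max(p) >= class_thresholds[c]:
            keep.append(i)
    return keep
```

Making the threshold class-specific (rather than a single global cutoff) is what lets selection stay reliable when the shift hits some classes harder than others.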
Together, these contributions advance the reliability of DG and TTA by reducing reliance on spurious cues, improving sample selection, and enabling robust adaptation under distribution shifts.

Item (Open Access): Finite-dimensional robust controller design for infinite-dimensional systems (2026-01), Bilgin, İrem Cansu

Time delay occurs in many systems, including control systems, process control applications, and large-scale mechanical structures. Due to the infinite-dimensional nature of delays, controller design is more complicated, since many traditional control methods depend on rational models of the plant. The Smith predictor has been widely studied as a control structure for delay systems: it compensates for dead time by separating the nominal plant dynamics from the delay element. Despite its simple structure and effectiveness, the classical Smith predictor has some limitations, such as sensitivity to modeling errors and the assumption that the plant model is precisely known. This study presents an extension of the classical Smith predictor and examines the robust stabilization of a specific category of multi-input multi-output (MIMO) infinite-dimensional linear time-invariant systems. Significant practical applications include finite-dimensional MIMO systems that experience time delays in either the input or output channels. Controllers are developed using a stable transfer matrix created via tangential Nevanlinna-Pick interpolation. The structure of the controller can be viewed as an extension of Smith predictors tailored for systems with time delays. A finite-dimensional approximation of the controller and its effects on the robust stability of the feedback system are also addressed, accompanied by four numerical examples.

Item (Unknown): Generalized Green functors and semisimplicity (2025-12), Akın, Mahmut Esat

A pursuit of abstract generality: the theory of biset functors provides a framework for the globalization of Mackey functors.
In this setting, linear morphisms between two finite groups are indexed by conjugacy classes of subgroups of their direct product. Although this formalism has proved useful in many situations, there exist Mackey functors that do not admit a global description within the theory of biset functors. Restricting attention to Green biset functors, and taking as a model an object introduced by Boltje and Danz, we introduce a generalization, which we call the theory of Green prebiset functors. In this extended setting, linear morphisms between two finite groups are indexed by all subgroups of the direct product. The conjugacy classes then arise as orbit sums with respect to the conjugation action. In a similar way, we obtain Green biset functors as special cases of Green prebiset functors. The results obtained in our framework are partial and are discussed in the introduction.

Item (Embargo): Delayed droplet coalescence during droplet shedding from superhydrophobic surfaces (2025-12), Tekinalp, Engin

Droplet shedding from inclined functional surfaces has important implications for dropwise condensation and anti-icing applications because it frees fresh nucleation sites, effectively enhancing heat transport. It is particularly important for dropwise condensation, which has been shown to be highly effective in terms of heat transfer when compared to filmwise condensation. Here, we study the shedding behavior of water droplets from inclined, textured, superhydrophobic substrates. Our experimental setup, which consists of high-speed imaging and a piezoelectric dispenser, is specifically tailored to image the shedding droplets from the side and the top while depositing microdroplets onto a larger droplet at a predefined rate.
Our results show that at the onset of droplet shedding, the coalescence of the microdroplets, which was instantaneous before shedding occurs, starts to be delayed significantly, as evidenced by the presence of a number of satellite droplets on the surface of the larger droplet. Such a delay would cause the heat transfer enhancements from dropwise condensation to plummet and needs to be studied in detail. For that purpose, we examined three candidate explanations: antibubble formation, cloaking, and instabilities. Our investigation eliminated antibubble formation and cloaking as possible explanations and determined that instabilities caused by the rapid rotation of the droplet while shedding generated an outward acceleration that limited the capacity to coalesce. This rotation was caused by a combination of external forces applied by the micro/nanostructures on the surface and internal forces applied by the rapidly changing Laplace pressure. Our study presents a fundamental understanding of a unique fluid dynamics phenomenon with many implications for condenser surface design, with the potential to be further generalized to the whole area of interfacial physics.

Item (Unknown): Analysis of power factor correction in AC-DC converters using frequency-clamped continuous conduction mode controller (2025-12), Salar, Oğuz

With increasing electricity demand throughout the world, regulations for efficiency and power quality also tighten. To meet this increased demand in AC/DC converters, power factor correction systems are used to achieve a power factor of 1.0 and to reduce the transmission of unused power and distortion in the distribution network. For these systems to function, several controller methods are used, with some specially designed to reduce electromagnetic interference (EMI).
In this thesis, several common controllers, such as the PI controller, the hysteresis controller, and a frequency-clamped critical conduction mode (FCCrM) controller, are simulated using the SIMPLORER program, in addition to a newly proposed frequency-clamped continuous conduction mode (FCCCM) controller. The simulation results show that the FCCCM controller can be used as a controller alternative, with options to optimize the results based on the operating conditions.

Item (Unknown): Power generation mechanics of a multi-stage thermoelectric generator with metal-organic framework coating (2025-12), Özkan, Ege

Thermoelectric power generators (TEGs) have the potential to replace batteries in electrical circuits with small power consumption. However, one of the bottlenecks that holds back TEGs is their low thermal-to-electrical energy conversion efficiency. Passive thermal management of TEGs and stacking multiple TEGs on top of each other can increase their conversion efficiency and maximum power output. In this thesis, we study multi-stage TEGs with a metal-organic framework (MOF) coating layer as passive thermal management to increase the power output of TEGs, and develop mathematical models and a COMSOL simulation model to understand and decouple the underlying complex physics. To develop the mathematical model, thermoelectric effects, adsorption and desorption in porous materials, heat and mass transfer in porous materials, and the thermophysical and transport properties of humid air are studied. The developed mathematical models were later used in the development of a multiphysics simulation in COMSOL. The numerical results obtained from the developed COMSOL model showed that, with desorption and thermal radiation, TEGs with MOF coating generated three times the electrical energy produced by TEGs with no MOF coating.
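The textbook relation underlying such TEG power figures is worth stating: a module with Seebeck coefficient S, internal resistance R_int, and temperature difference dT delivers P = (S·dT)² · R_load / (R_int + R_load)² to a load, maximized at matched load R_load = R_int. The example numbers below are illustrative, not the thesis's device parameters.

```python
def teg_power(S, dT, R_int, R_load):
    """Electrical power (W) a TEG delivers to a load resistance."""
    V = S * dT                    # open-circuit Seebeck voltage
    I = V / (R_int + R_load)      # loop current through internal + load R
    return I * I * R_load

def max_power(S, dT, R_int):
    """Matched-load maximum: (S * dT)^2 / (4 * R_int), reached at
    R_load = R_int."""
    return (S * dT) ** 2 / (4 * R_int)
```

This dependence on dT is why passive thermal management (here, MOF desorption cooling and radiative cooling of the cold side) raises output: power grows quadratically with the sustained temperature difference.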
Moreover, with only radiative cooling, the generated energy was double that generated by TEGs without MOF coating.

Item (Unknown): 3D implementation of bias-corrected phase-based cr-MREPT (2025-12), Çan, Mustafa Kaan

Electrical property imaging has been a point of interest for decades, as it has promising applications such as anatomical imaging, tumor detection, stroke detection and classification, early diagnosis of Alzheimer's disease and dementia, RF safety and SAR calculations, and therapy planning and monitoring. Among different electrical property imaging methods, MREPT has the advantage of using a standard MRI device, so it is non-invasive, does not use external coils or electrodes, and does not rely on ionizing radiation. Many MREPT methods have been proposed, but most suffer from similar limitations, such as internal boundary artifacts, the transceive phase approximation, concave bias, and long imaging times caused by high SNR requirements. These problems significantly limit the clinical feasibility of MREPT. cr-MREPT, especially in its phase-based form, overcomes internal boundary artifacts without using the transceive phase approximation but still suffers from concave bias and high SNR requirements. Moreover, even though implementation in 3D is straightforward, phase-based cr-MREPT has not previously been employed in 3D, since hardware requirements and reconstruction times make the method impractical for clinical applications. In this thesis, we aim to develop an MREPT method that overcomes all the mentioned limitations and is feasible for clinical applications. To this end, a novel bias correction method is proposed to overcome the concave bias. The proposed method is evaluated on simulation and experimental phantoms, and the conductivity distributions are successfully reconstructed in each case.
Later on, the bias-corrected phase-based cr-MREPT method is implemented in 3D, and a new, practical reconstruction method is proposed to improve the feasibility of applying the method to conductivity reconstructions of large objects. The method divides the object into smaller volumes so that the reconstruction of each volume is more manageable and can be parallelized to accelerate the solution process. The sizes of these small regions are determined by performing a sensitivity analysis on phase-based cr-MREPT, and the performance of the proposed method is demonstrated on various noiseless and noise-added simulation data. Last but not least, a cr-MREPT library is developed to improve the availability of 2D/3D bias-corrected cr-MREPT for researchers, increase collaboration between different groups, and provide better comparative evaluation of different methods.

Item (Unknown): Liquid-assisted approaches in thermal fiber drawing techniques to develop conductive polymer fibers (2025-12), Fatima, Arooj

Flexible electronic fibers that combine scalable manufacturability with multimodal physiological sensing remain challenging to realize due to the conflicting requirements of conductivity, porosity, mechanical compliance, and environmental robustness. Here, an in situ thermally induced phase separation (TIPS) strategy integrated into thermal fiber drawing (TFD) is proposed to produce continuous porous graphene–polymer nanocomposite fibers with independently tunable pore architecture and electrical properties. Starting from a solvent-borne graphene/polyvinylidene fluoride (PVDF) slurry encapsulated within an elastomeric cladding, tens-of-meters-long fibers are produced from a compact preform while high graphene loadings are accommodated, enabling the formation of a percolated conductive network embedded within a phase-separated polymeric matrix.
The resulting fiber exhibits an electrical conductivity of (1.35 ± 0.96) × 10⁻³ S m⁻¹, indicative of a moderately percolated network that balances electrical transport and structural porosity. The fabricated fibers are operated as multimodal wearable sensors, including: (i) a temperature sensor exhibiting a stable output and high temperature sensitivity with a negative temperature coefficient of resistance (TCR = 0.558 °C⁻¹); (ii) a pressure sensor demonstrating a reliable cyclic response; and (iii) a dry-electrode cardiovascular monitoring interface, for which the impedance magnitude and phase behavior closely match those of commercial electrodes at low frequencies, while the fundamental features of signals recorded from human skin are captured. The removable elastomeric cladding, which imparts water resistance, is shown to support textile integration and stable operation under humid conditions. In the second part of this thesis, the fabrication and characterization of highly conductive polymer fibers incorporating carbon nanotubes (CNTs) were systematically investigated. A liquid-assisted thermal drawing approach was employed, in which a homogeneous carbon nanotube/propylene carbonate (CNT/PC) slurry was introduced into the fiber preform to enable continuous material feeding during the thermal drawing process. This methodology facilitated uniform nanoscale dispersion of CNTs within the polymer matrix and promoted the formation of interconnected conductive pathways along the fiber axis during drawing. As a result of this optimized liquid-assisted process, the fabricated fibers exhibited an electrical conductivity as high as 95 S m⁻¹, more than two orders of magnitude higher than that of conventional conductive polymer films.
This significant enhancement is attributed to the effective CNT dispersion, alignment, and percolation achieved under continuous thermal elongation, highlighting the advantages of fiber-based architectures over planar film counterparts for efficient electrical transport. Overall, this thesis establishes scalable thermal-drawing-based strategies for engineering highly conductive and porous polymer fibers, providing a unified framework that bridges fundamental conductive network formation with multifunctional fiber-based sensing platforms for wearable and textile-integrated applications.

Item Unknown Selective routing problems in humanitarian operations (2025-12) Dursunoğlu, Çağla Fatma
In this thesis, we investigate the critical role of demand frequency in disaster response. We categorize demand into three types: one-time, continuous, and periodic, and define a distinct problem for each. For one-time demand, we define a location-allocation problem with service duration decisions; for continuous demand, a maximal covering location problem; and for periodic demand, a scheduling problem with duration decisions. For each problem, we develop tailored mathematical formulations that integrate realistic demand functions. First, we consider one-time demand, which is characterized by immediate and non-recurring needs in the aftermath of a disaster. We develop three integer programming models for the location-allocation problem with service duration decisions, which incorporate a stepwise demand function that captures the gradual decline in demand over time. We compare the mathematical models on real-world datasets in terms of solution time and optimality gap for large instances. Through extensive computational experiments, we observe that the best-performing model achieves optimal or near-optimal solutions significantly faster and with smaller optimality gaps, especially when solving large-scale instances.
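The stepwise demand function mentioned above can be sketched as follows. The breakpoints and demand levels here are hypothetical, chosen only to illustrate a demand that declines in steps after a disaster; the thesis abstract does not report the actual values:

```python
def stepwise_demand(hours_after_disaster,
                    breakpoints=(24, 72, 168),
                    levels=(1.0, 0.6, 0.3, 0.1)):
    """Fraction of the initial demand remaining at a given time.

    Demand stays constant within each interval and drops at the
    breakpoints (in hours). All numbers are illustrative, not
    values from the thesis.
    """
    for bp, level in zip(breakpoints, levels):
        if hours_after_disaster < bp:
            return level
    return levels[-1]  # residual demand after the last breakpoint

first_day = stepwise_demand(10)    # within the first 24 h
first_week = stepwise_demand(100)  # between 72 h and 168 h
```

Because the function is piecewise constant, service duration decisions interact with it directly: staying longer at a location captures demand at progressively lower levels.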
The results show that adopting temporal coverage of demand at different locations satisfies more demand over time. We observe that a dynamically changing disaster environment requires more agile deployment strategies for mobile service units. Second, we consider continuous demand, representing critical and ongoing service requirements. We take into account different types of mobile service units (high-capacity, medium-capacity, and low-capacity). We provide a mathematical formulation for the maximal covering location problem that determines the optimal locations of mobile service units and the allocation of demand points to them. We also integrate a demand function that accounts for diminishing demand due to coverage. We consider two concepts for how demand from covered areas is satisfied: binary and gradual coverage. In binary coverage, demand is fully satisfied if the demand location is within the coverage radius of a mobile service unit. In gradual coverage, by contrast, the satisfied demand follows a decay function that decreases with distance, so the total demand satisfied drops significantly. According to our results, the model prioritizes locating mobile service units closer to demand areas to mitigate the decay effect. Finally, we consider periodic demand, representing recurring service needs over specific time intervals. We introduce four mathematical formulations for a scheduling problem with duration decisions: compact 5-index and 4-index models, and two 3-index non-compact models. The latter two are derived through a Benders-type projection method and solved using a branch-and-cut algorithm strengthened with valid inequalities. Although decomposition-friendly formulations are attractive, they face significant computational overhead. We also conduct a sensitivity analysis on the parameters and identify the most influential settings.
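The contrast between binary and gradual coverage can be illustrated with a minimal sketch. The exponential decay form and the decay rate used here are assumptions for illustration only; the abstract does not specify the exact decay function:

```python
import math

def satisfied_demand(demand, distance, radius, gradual=False, decay_rate=0.5):
    """Demand satisfied by one mobile service unit at a covered location.

    Binary coverage: all-or-nothing within the coverage radius.
    Gradual coverage: decays with distance (assumed exponential form;
    the decay_rate value is illustrative, not from the thesis).
    """
    if distance > radius:
        return 0.0          # outside the coverage radius: nothing satisfied
    if not gradual:
        return demand       # binary coverage: fully satisfied
    return demand * math.exp(-decay_rate * distance)

binary = satisfied_demand(100, 2.0, 5.0)                 # full demand
gradual = satisfied_demand(100, 2.0, 5.0, gradual=True)  # decayed demand
```

Under gradual coverage, shrinking `distance` recovers more of the demand, which is consistent with the reported tendency of the model to place units closer to demand areas.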
Additionally, we analyze demand functions, including linear, exponential, sigmoid, quadratic, logarithmic, and step functions. We evaluate these functions by post-processing their solutions with a neutral demand function for a fair comparison. Lastly, we provide a case study using data from the 2023 Kahramanmaraş earthquakes to validate the model's practical applicability. According to the solutions, high-density districts receive extended service durations, while low-density districts receive the minimum visit requirements. To enhance fairness, the parameters governing the minimum requirement can be updated for low-density areas. Based on these comprehensive computational studies across all three demand types, we provide strategic insights for deploying mobile service units in complex disaster settings.

Item Unknown Development of an RNA-seq analysis application, iDEAlist, interactive differential expression analysis of gene lists (2025-12) Demirbaş, Algı
Transcriptomics data analysis has been revolutionary in increasing our understanding of changes in tissues and cells in response to treatments, time, and conditions across many species, including humans, mice, and zebrafish. Multiple RNA-seq analysis applications have been developed in R Shiny, and many of these share a core analysis workflow with additional strengths. However, to the best of our knowledge, no existing application is geared towards the analysis of all differentially expressed genes (DEGs) or selected gene lists using Venn diagrams of multiple user-defined contrasts. This approach enables identifying the unique and common DEGs among the given contrasts, and these filtered gene sets can be annotated with different functional terms and visualized with plots and networks. To address this gap, iDEAlist was developed as an R Shiny web application.
iDEAlist supports the analysis of multiple contrasts, either concurrently or individually, through comprehensive filtering options, Venn diagrams, ORA/GSEA, and visualization options. The application allows users to filter DEGs by uploading gene lists or by selecting pathway terms from the Reactome, KEGG, and GO databases. Most importantly, all of these features make iDEAlist well suited to transcriptomics data analysis performed on whole zebrafish larvae, whose bulk tissue includes all of the organs. By using specified gene lists, one can identify the DEGs and their functional importance within a given function- or tissue-specific gene set. The use of iDEAlist was demonstrated on both a publicly available dataset (GSE193433) and an in-house zebrafish dataset. These assessments demonstrated that Reactome- and KEGG-based gene list selection (e.g., gene lists containing “complement system” keywords), or filtering based on public tissue-enriched gene lists, allowed the user to focus on a group of genes that were significantly modulated under heritable resilience/susceptibility to osmotic and physical (netting) stress, and dietary interventions, in whole zebrafish larvae. In addition, iDEAlist is not specific to zebrafish; it can also be used with human and mouse RNA-seq count data, making gene list-focused analysis possible with ease and automation.
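The Venn-diagram comparison of DEG lists across contrasts described above boils down to set operations. The gene symbols below are hypothetical examples, not output from iDEAlist (which performs this kind of intersection interactively, in R, before annotating the filtered sets with ORA/GSEA):

```python
# Hypothetical DEG sets from two user-defined contrasts (e.g., stressed
# vs. control, and diet A vs. diet B); gene symbols are illustrative only.
deg_contrast_a = {"mpx", "lyz", "c3a.1", "il1b"}
deg_contrast_b = {"c3a.1", "il1b", "socs3a"}

common = deg_contrast_a & deg_contrast_b    # DEGs shared by both contrasts
unique_a = deg_contrast_a - deg_contrast_b  # DEGs unique to contrast A
unique_b = deg_contrast_b - deg_contrast_a  # DEGs unique to contrast B
```

Each resulting region of the Venn diagram (common, unique to A, unique to B) is a gene set that can then be filtered against an uploaded list or a pathway term and passed to functional annotation.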