Dept. of Electrical and Electronics Engineering - Ph.D. / Sc.D.
Item Open Access 3D electron density estimation in the ionosphere by using IRI-Plas model and GPS measurements (Bilkent University, 2016-05) Tuna, Hakan; Arıkan, Orhan
Three-dimensional imaging of the electron density distribution in the ionosphere is a crucial task for investigating ionospheric effects. Dual-frequency Global Positioning System (GPS) satellite signals can be used to estimate the Slant Total Electron Content (STEC) along the propagation path between a GPS satellite and a ground-based receiver station. However, the estimated GPS-STEC values are too sparse and too non-uniformly distributed to yield reliable 3D electron density distributions from the measurements alone. Standard tomographic reconstruction techniques are not accurate or reliable enough to represent the full complexity of the variable ionosphere. On the other hand, model-based electron density distributions are produced according to the general trends of the ionosphere, and these distributions do not agree with measurements, especially during geomagnetically active hours. In this thesis, a novel regional 3D electron density reconstruction technique, namely IONOLAB-CIT, is proposed to assimilate GPS-STEC into physical ionospheric models. IONOLAB-CIT is based on an iterative optimization framework that tracks deviations from the ionospheric model in terms of the F2-layer critical frequency and the maximum ionization height, obtained by comparing STEC generated from the International Reference Ionosphere extended to Plasmasphere (IRI-Plas) model with GPS-STEC. IONOLAB-CIT is applied successfully to the reconstruction of electron density distributions over Turkey, during calm and disturbed hours of the ionosphere, using the Turkish National Permanent GPS Network (TNPGN-Active). The reconstructions are also validated by predicting the STEC measurements that are left out in the reconstruction phase. IONOLAB-CIT is also compared with real ionosonde measurements over Greece, and it is shown that the IONOLAB-CIT results are in good agreement with the ionosonde measurements. The results of the IONOLAB-CIT technique are further tracked and smoothed in time by Kalman filtering to increase the robustness of the results.
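The assimilation step described above amounts to perturbing a physics-based ionospheric model until its synthesized TEC matches the GPS-derived TEC. The sketch below illustrates that idea with a deliberately simplified, hypothetical forward model (a single Chapman layer standing in for IRI-Plas) and a least-squares fit of the F2-layer perturbation; it is not the IONOLAB-CIT algorithm itself, and all parameter values are illustrative.

```python
# Minimal sketch: adjust an F2-layer model parameter so synthesized TEC matches measured TEC.
# The single Chapman layer below is a toy stand-in for the IRI-Plas model (assumption).
import numpy as np
from scipy.optimize import least_squares

def chapman_vtec(foF2_mhz, hmF2_km=320.0, scale_km=60.0):
    """Vertical TEC (in TECU) of a single Chapman layer."""
    h = np.linspace(100.0, 1000.0, 2000)               # altitude grid [km]
    NmF2 = 1.24e10 * foF2_mhz ** 2                      # peak density [el/m^3] from foF2 [MHz]
    z = (h - hmF2_km) / scale_km
    Ne = NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))
    return Ne.sum() * (h[1] - h[0]) * 1e3 / 1e16        # integrate over height [m], convert to TECU

# "Measured" TEC values (synthetic here); IONOLAB-CIT instead uses many GPS-STEC slant paths.
rng = np.random.default_rng(0)
tec_meas = chapman_vtec(7.2) + rng.normal(0.0, 0.2, size=8)

# Track the deviation of the model from the measurements in terms of foF2 only
# (hmF2 is held fixed here for identifiability in this 1-D toy; the thesis perturbs both).
fit = least_squares(lambda p: chapman_vtec(p[0]) - tec_meas, x0=[6.0], bounds=([1.0], [15.0]))
print("estimated foF2 [MHz]:", fit.x[0])
```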
Item Open Access Ablation cooled material removal with bursts of ultrafast pulses (Bilkent University, 2016-01) Kerse, M. Can; İlday, F. Ömer
Material processing with femtosecond pulses allows precise and non-thermal material removal and is widely used in scientific, medical and industrial applications. However, its potential is limited by the low speed at which material can be removed and by the complexity of the associated laser technology, which arises from the need to overcome the high laser-induced optical breakdown threshold for efficient ablation. The physics of the interaction regime hinders a straightforward scaling up of the removal rate by using more powerful lasers, due to effects such as plasma shielding, saturation, or collateral damage caused by heat accumulation. In analogy to a technique routinely used for the atmospheric re-entry of spacecraft since the 1950s, ablation cooling is exploited here to circumvent this limitation: rapid successions of pulses repeated at ultrahigh repetition rates are applied from custom-developed lasers to ablate the target material before the residual heat deposited by previous pulses diffuses away from the interaction region. This constitutes a new, previously unrecognized and unexplored regime of laser-material interactions, in which heat removal due to ablation is comparable to heat conduction. Proof-of-principle experiments were conducted on a broad range of targets including copper, silicon, thermoelectric couplers, PZT ceramic, agar gel, soft tissue and hard tissue. They demonstrate a reduction of the required pulse energies by three orders of magnitude, while simultaneously increasing the ablation efficiency by an order of magnitude, as well as thermal-damage-free removal of brain tissue at 2 mm³/min and tooth at 3 mm³/min, an order of magnitude faster than previous results.

Item Open Access Accurate and efficient solutions of electromagnetic problems with the multilevel fast multipole algorithm (Bilkent University, 2009) Ergül, Özgür Salih; Gürel, Levent
The multilevel fast multipole algorithm (MLFMA) is a powerful method for the fast and efficient solution of electromagnetic problems discretized with large numbers of unknowns. This method reduces the complexity of the matrix-vector multiplications required by iterative solvers and enables the solution of large-scale problems that cannot be investigated by using traditional methods. On the other hand, the efficiency and accuracy of solutions via MLFMA depend on many parameters, such as the integral-equation formulation, discretization, iterative solver, preconditioning, computing platform, parallelization, and many other details of the numerical implementation. This dissertation is based on our efforts to develop sophisticated implementations of MLFMA for the solution of real-life scattering and radiation problems involving three-dimensional complicated objects with arbitrary geometries.

Item Open Access Activity recognition invariant to position and orientation of wearable motion sensor units (Bilkent University, 2019-04) Yurtman, Aras; Özaktaş, Billur Barshan
We propose techniques that achieve invariance to the placement of wearable motion sensor units in the context of human activity recognition. First, we focus on invariance to sensor unit orientation and develop three alternative transformations to remove from the raw sensor data the effect of the orientation at which the sensor unit is placed. The first two orientation-invariant transformations rely on the geometry of the measurements, whereas the third is based on estimating the orientations of the sensor units with respect to the Earth frame by exploiting the physical properties of the sensory data. We test them with multiple state-of-the-art machine-learning classifiers using five publicly available datasets (when applicable) containing various types of activities acquired by different sensor configurations. We show that the proposed methods achieve an accuracy similar to that of the reference system where the units are correctly oriented, whereas the standard system cannot handle incorrectly oriented sensors. We also propose a novel non-iterative technique for estimating the orientations of the sensor units based on the physical and geometrical properties of the sensor data to improve the accuracy of the third orientation-invariant transformation. All three transformations can be integrated into the pre-processing stage of existing wearable systems without much effort, since we do not make any assumptions about the sensor configuration, the body movements, or the classification methodology.
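Geometry-based orientation invariance, as described above, can be obtained from quantities that are unchanged by any rotation of the sensor unit, such as vector norms and inner products between successive measurement vectors. The snippet below is a minimal, hypothetical illustration of this general idea (not one of the thesis's specific transformations): it builds such features from tri-axial accelerometer data and verifies that they are unaffected by an arbitrary rotation.

```python
# Minimal sketch: rotation-invariant features from tri-axial sensor sequences.
# Illustrates the geometric idea only, not the exact transformations of the thesis.
import numpy as np

def rotation_invariant_features(acc):
    """acc: (T, 3) array of accelerometer samples; returns per-sample invariant features."""
    norms = np.linalg.norm(acc, axis=1)                  # |a_t| is unchanged by any rotation
    dots = np.sum(acc[:-1] * acc[1:], axis=1)            # a_t . a_{t+1} is also unchanged
    return np.column_stack([norms[:-1], norms[1:], dots])

rng = np.random.default_rng(0)
acc = rng.normal(size=(200, 3))                          # synthetic recording

# A random proper rotation matrix, e.g. from the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Q *= np.sign(np.linalg.det(Q))                           # flip sign if needed to make det(Q) = +1

feats_original = rotation_invariant_features(acc)
feats_rotated = rotation_invariant_features(acc @ Q.T)   # same unit, arbitrarily re-oriented
print(np.allclose(feats_original, feats_rotated))        # True: features are orientation-invariant
```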
Second, we develop techniques that achieve invariance to the positioning of the sensor units in three ways: (1) We propose transformations that are applied to the sensory data to allow each unit to be placed at any position within a pre-determined body part. (2) We propose a transformation technique that allows the units to be interchanged, so that the user does not need to distinguish between them before positioning. (3) We employ three different techniques to classify the activities based on a single sensor unit, whereas the training set may contain data acquired by multiple units placed at different positions. We combine (1) with (2) and also with (3) to achieve further robustness to sensor unit positioning. We evaluate our techniques on a publicly available dataset using seven state-of-the-art classifiers and show that the reduction in accuracy is acceptable, considering the flexibility, convenience, and unobtrusiveness in the positioning of the units. Finally, we combine the position- and orientation-invariant techniques to achieve both simultaneously. The accuracy values are much higher than those of random decision making, although some of them are significantly lower than those of the reference system with correctly placed units. The trade-off between the flexibility in sensor unit placement and the classification accuracy indicates that different approaches may be suitable for different applications.

Item Open Access Adaptive observer designs for friction estimation in position control of simple mechanical systems with time delay (Bilkent University, 2021-09) Odabaş, Caner; Morgül, Ömer
Friction force/torque is a well-known natural effect that can cause performance degradation or even instability in mechanical systems, although it can sometimes be disregarded in the closed-loop feedback design phase. Hence, friction modeling and cancellation methods can be vital for achieving the desired robustness and performance criteria in position control problems. The topic of friction cancellation is divided into two main categories: model-based and non-model-based methods. Friction modeling is a broad area of research and there are many different modeling approaches of varying complexity. Among these approaches, the Coulomb model is one of the simplest yet most fundamental. Nevertheless, being a classical static model, it is in some cases inadequate to exhibit the dominant friction components occurring at different motion stages, such as the break-away force, stick-slip motion, pre-sliding behavior, or friction lag. Dynamical models, e.g., the LuGre model, are generally more advanced and, as a result, better at describing such friction effects. Unfortunately, in these cases the number of friction parameters is increased. In fact, there is a trade-off between model complexity and parameter identification. A desired system response may not be achieved when the model parameters do not coincide with the existing friction coefficients. Thus, precise identification of each parameter can be challenging when there are many of them. Besides, some of these parameters might be time-varying due to the environment, temperature, material properties, position, etc. Therefore, non-model-based adaptive schemes are prevalent in the literature, since these methods do not require any parameter identification. In this study, we focus on adaptive observer-based friction compensation techniques and provide some stability conditions.
First, we consider simple second-order mechanical systems, with or without time delay, under Coulomb friction. To estimate the Coulomb friction, we first consider the Friedland-Park observer. Then, some necessary conditions are stated to extend the estimation function in the observer structure to a larger class of functions. Measurement delay can be especially significant, since observers estimate friction based on velocity measurements. Therefore, it is proposed to employ a velocity predictor, based either on numerical differential equation solvers or on an inverse Padé approximant, when the existing time delay is large. Moreover, a new observer design that considers the friction and velocity error dynamics together is proposed as a novel contribution. Extensive MATLAB simulations are conducted to investigate the performance of the proposed observers in a closed-loop position control system with and without delay. To this end, Smith predictor and ITAE index-based designs are considered for the position controller. In some of these simulations, the LuGre model is preferred over Coulomb friction to mimic the actual friction, in order to observe the effects of dynamic parameters. Moreover, some experiments are performed on a DC motor platform driven by an Arduino Uno microcontroller. In light of the acquired results, observer-based friction compensation improves the system performance even when the existing friction cannot be confined to a Coulomb coefficient, especially when the implemented controller has a low bandwidth. Also, in terms of practicability, it is an advantage that these observer structures do not require any parameter identification.
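For reference, the sketch below simulates a Coulomb friction observer of the Friedland-Park type, written here in its commonly cited form with the exponent parameter set to 1, on a single mass driven by a sinusoidal force. It is a generic illustration under assumed parameter values, not the extended observer or the delay-compensated designs developed in the thesis.

```python
# Minimal sketch: Friedland-Park-type observer estimating a constant Coulomb friction level.
# Plant: m*dv/dt = u - a*sgn(v); observer state z gives the estimate a_hat = z - k*|v|.
import numpy as np

m, a_true, k = 1.0, 0.8, 5.0              # mass, true Coulomb level, observer gain (illustrative)
dt, T = 1e-4, 5.0
t = np.arange(0.0, T, dt)
u = 2.0 * np.sin(2.0 * np.pi * t)         # persistent excitation so that v != 0 most of the time

v, z = 0.1, 0.0                           # initial velocity and observer state
a_hat_hist = np.empty_like(t)
for i in range(len(t)):
    a_hat = z - k * abs(v)
    a_hat_hist[i] = a_hat
    # observer update uses the estimated acceleration (u - a_hat*sgn(v))/m
    z += dt * (k / m) * np.sign(v) * (u[i] - a_hat * np.sign(v))
    # plant update with the true friction
    v += dt * (u[i] - a_true * np.sign(v)) / m

print("final friction estimate:", a_hat_hist[-1], "(true value:", a_true, ")")
```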
Item Open Access Airborne CMUT cell design (Bilkent University, 2014) Yılmaz, Aslı; Köymen, Hayrettin
All transducers used in airborne ultrasonic applications, including capacitive micromachined ultrasonic transducers (CMUTs), incorporate loss mechanisms to have a reasonably wide frequency bandwidth. However, CMUTs can yield high efficiency in airborne applications and, unlike other technologies, they offer wider bandwidth due to their low characteristic impedance, even for efficient designs. Despite these advantages, achieving the full potential is challenging due to the lack of a systematic method for designing wide-bandwidth CMUTs. In this thesis, we present a method for airborne CMUT design. We use a lumped element circuit model and a harmonic balance (HB) approach to optimize CMUTs for maximum transmitted power. Airborne CMUTs have a narrowband characteristic in their mechanical section, due to low radiation impedance. In this work, we restrict the analysis to a single frequency, and the transducer is driven by a sinusoidal voltage at half the operation frequency, without any dc bias. We propose a new mode of airborne operation for CMUTs, in which the plate motion spans the entire gap. We achieve this maximum swing at a specific frequency by applying the lowest drive voltage, and we call this mode of operation the Minimum Voltage Drive Mode (MVDM). We present equivalent-circuit-based design fundamentals for airborne CMUT cells and verify the design targets with fabricated CMUTs. The performance limits of silicon membranes for airborne applications are derived. We experimentally obtain a 78.9 dB//20 µPa@1m source level at 73.7 kHz, with a CMUT cell of radius 2.05 mm driven by a 71 V sinusoidal drive voltage at half the frequency. The measured quality factor is 120. CMUTs can achieve a large bandwidth (a low quality factor) since they can be manufactured to have thin plates. Low-quality-factor airborne CMUTs are affected more strongly by the ambient pressure and therefore experience a larger membrane deflection. This effect increases the stiffness of the plate and can be accounted for by a nonlinear compliance in the circuit model. We study the interaction of the compliance nonlinearity and the nonlinearity of the transduction force and show that transduction overwhelms the compliance nonlinearity. To match the simulation results with the admittance measurements, we implement a very accurate model-based characterization approach in which we modify the equivalent circuit model. In the modified circuit model, we introduce new elements to include loss mechanisms. We also change the dimension parameters used in the simulation to compensate for the differences in the resonance frequency and amplitude.

Item Open Access Alternative approaches and noise benefits in hypothesis-testing problems in the presence of partial information (Bilkent University, 2011) Bayram, Suat; Gezici, Sinan
The performance of some suboptimal detectors can be enhanced by adding independent noise to their observations. In the first part of the dissertation, the effects of additive noise are studied according to the restricted Bayes criterion, which provides a generalization of the Bayes and minimax criteria. Based on a generic M-ary composite hypothesis-testing formulation, the optimal probability distribution of the additive noise is investigated. Also, sufficient conditions under which the performance of a detector can or cannot be improved via additive noise are derived. In addition, simple hypothesis-testing problems are studied in more detail, and additional improvability conditions that are specific to simple hypotheses are obtained. Furthermore, the optimal probability distribution of the additive noise is shown to include at most M mass points in a simple M-ary hypothesis-testing problem under certain conditions. Then, global optimization, analytical, and convex relaxation approaches are considered to obtain the optimal noise distribution. Finally, detection examples are presented to investigate the theoretical results.

In the second part of the dissertation, the effects of additive noise are studied for M-ary composite hypothesis-testing problems in the presence of partial prior information. Optimal additive noise is obtained according to two criteria, which assume a uniform distribution (Criterion 1) or the least-favorable distribution (Criterion 2) for the unknown priors. The statistical characterization of the optimal noise is obtained for each criterion. Specifically, it is shown that the optimal noise can be represented by a constant signal level or by a randomization of a finite number of signal levels according to Criterion 1 and Criterion 2, respectively. In addition, the cases of unknown parameter distributions under some composite hypotheses are considered, and upper bounds on the risks are obtained. Finally, a detection example is provided to illustrate the theoretical results.
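The counter-intuitive benefit of added "noise" for a detector whose structure is fixed can be reproduced numerically. The Monte Carlo sketch below uses a deliberately simple, hypothetical setup (a conservative fixed-threshold detector for a shifted Gaussian observation) and shows that adding a constant level, in line with the constant-signal-level characterization mentioned above, raises the detection probability while keeping the false-alarm probability within its budget; it is only an illustration, not one of the dissertation's examples.

```python
# Minimal sketch: a fixed suboptimal detector can benefit from an added constant level.
# H0: x ~ N(0,1), H1: x ~ N(1,1); detector decides H1 if x > 3 (threshold fixed, too conservative).
import numpy as np

rng = np.random.default_rng(1)
N, threshold, alpha = 200_000, 3.0, 0.1        # trials, fixed threshold, false-alarm budget

x_h0 = rng.normal(0.0, 1.0, N)
x_h1 = rng.normal(1.0, 1.0, N)

def rates(offset):
    """False-alarm and detection probabilities when a constant 'noise' offset is added."""
    pfa = np.mean(x_h0 + offset > threshold)
    pd = np.mean(x_h1 + offset > threshold)
    return pfa, pd

print("no added noise:  P_FA=%.4f  P_D=%.4f" % rates(0.0))
# Choose the largest constant offset that still respects P_FA <= alpha (simple line search).
offsets = np.linspace(0.0, 2.5, 126)
best = max(c for c in offsets if rates(c)[0] <= alpha)
print("offset %.2f   :  P_FA=%.4f  P_D=%.4f (still meets P_FA <= %.2f)"
      % ((best,) + rates(best) + (alpha,)))
```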
In the third part of the dissertation, the effects of additive noise are studied for binary composite hypothesis-testing problems. A Neyman-Pearson (NP) framework is considered, and the maximization of detection performance under a constraint on the maximum probability of false alarm is studied. The detection performance is quantified in terms of the sum, the minimum, and the maximum of the detection probabilities corresponding to possible parameter values under the alternative hypothesis. Sufficient conditions under which detection performance can or cannot be improved are derived for each case. Also, a statistical characterization of the optimal additive noise is provided, and the resulting false-alarm probabilities and bounds on detection performance are investigated. In addition, optimization-theoretic approaches for obtaining the probability distribution of the optimal additive noise are discussed. Finally, a detection example is presented to investigate the theoretical results.

In the last part of the dissertation, the restricted NP approach is studied for composite hypothesis-testing problems in the presence of uncertainty in the prior probability distribution under the alternative hypothesis. A restricted NP decision rule aims to maximize the average detection probability under constraints on the worst-case detection and false-alarm probabilities, and adjusts the constraint on the worst-case detection probability according to the amount of uncertainty in the prior probability distribution. Optimal decision rules according to the restricted NP criterion are investigated, and an algorithm is provided to calculate the optimal restricted NP decision rule. In addition, it is observed that the average detection probability is a strictly decreasing and concave function of the constraint on the minimum detection probability. Finally, a detection example is presented, and extensions to more generic scenarios are discussed.

Item Open Access Analysis and control of periodic gaits in legged robots (Bilkent University, 2017-11) Hamzaçebi, Hasan; Morgül, Ömer
The analysis, identification and control of legged locomotion have been of interest to various researchers working towards building legged robots that move the way animals do in nature. The extensive studies on understanding legged locomotion have led to mathematical models, such as the Spring-Loaded Inverted Pendulum (SLIP) template (and its various derivatives), that can be used to identify, analyze and control legged locomotor systems. Despite its seemingly simple nature, being a point mass attached to a massless spring from a dynamics perspective, the SLIP model constitutes a restricted three-body problem formulation, whose non-integrability was proven long ago. Thus, researchers have come up with approximate analytical solutions or have used other techniques, such as partial feedback linearization, for the sake of obtaining analytical Poincaré return maps that govern the motion of the desired legged locomotor system. In the first part of this thesis, we consider a SLIP-based legged locomotion model, which we call the Multi-Actuated Dissipative SLIP (MD-SLIP), that extends the simple SLIP model with two additional actuators. The first one is a linear actuator attached serially to the leg spring to ensure direct control over the compression and decompression of the leg spring. The second actuator is a rotary one attached to the hip, which provides the ability to inject torque inputs into the system dynamics and is mainly inspired by biological legged locomotor systems. Following the analysis of the MD-SLIP model, we utilize a partial feedback linearization strategy by which we can cancel some nonlinear dynamics of the legged locomotion model and obtain exact analytical solutions without needing any approximation. Having exact analytical solutions is crucial for investigating the stability characteristics of the MD-SLIP model during its hopping gait.
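For context, the stance-phase dynamics of the underlying SLIP template can be integrated numerically in a few lines. The sketch below simulates the classical conservative SLIP (point mass on a massless spring leg, polar coordinates about the toe) from touchdown to liftoff; it does not include the additional linear and hip actuators or the damping of the MD-SLIP model, and all parameter values are illustrative.

```python
# Minimal sketch: stance phase of the classical (conservative) SLIP model.
# Convention: mass position relative to the toe is (-r*sin(theta), r*cos(theta)),
# so theta > 0 when the mass is behind the toe and +x is the direction of travel.
import numpy as np
from scipy.integrate import solve_ivp

m, k, r0, g = 80.0, 20000.0, 1.0, 9.81        # mass, leg stiffness, rest length, gravity (illustrative)

def stance_dynamics(t, s):
    r, rdot, th, thdot = s
    rddot = r * thdot**2 - g * np.cos(th) + (k / m) * (r0 - r)
    thddot = (g * np.sin(th) - 2.0 * rdot * thdot) / r
    return [rdot, rddot, thdot, thddot]

def liftoff(t, s):                            # event: leg back at its rest length while extending
    return s[0] - r0
liftoff.terminal = True
liftoff.direction = 1

# Touchdown: leg uncompressed at a 15 degree attack angle, CoM moving forward and slightly down.
th0 = np.deg2rad(15.0)
vx, vy = 3.0, -0.5                            # CoM velocity at touchdown [m/s]
rdot0 = -vx * np.sin(th0) + vy * np.cos(th0)
thdot0 = -(vx * np.cos(th0) + vy * np.sin(th0)) / r0

sol = solve_ivp(stance_dynamics, [0.0, 1.0], [r0, rdot0, th0, thdot0],
                events=liftoff, max_step=1e-3, rtol=1e-8)
print("stance duration [s]:", sol.t_events[0][0])
print("liftoff leg angle [deg]:", np.degrees(sol.y_events[0][0][2]))
```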
We illustrate and compare the applicability of our solutions through open-loop and closed-loop hopping performance on various rough terrain simulations. Finally, we show how the MD-SLIP model can be anchored to bipedal legged locomotion models, where we assign independent MD-SLIP models to the two legs and investigate the system performance under their simultaneous but independent control. The proposed bipedal legged locomotion model is called the Multi-Actuated Dissipative Bipedal SLIP (MDB-SLIP) model. The key idea here is that we can still utilize the partial feedback linearization concept applied to the original MD-SLIP model and ensure exact analytical solutions for the MDB-SLIP model as well. We also provide detailed investigations of the open-loop and closed-loop walking gait performance of the MDB-SLIP model on different noisy terrain profiles.

Item Open Access Analysis and design of switching and fuzzy systems (Bilkent University, 2002-09) Akgül, Murat; Morgül, Ömer
In this thesis we consider controller design problems for switching and fuzzy systems. In switching systems, the system dynamics and/or control input take different forms in different parts of the underlying state space. In fuzzy systems, the system dynamics and/or control input consist of certain logical expressions. From this point of view, it is reasonable to expect certain similarities between these systems. We show that under certain conditions, a switching system may be converted into an equivalent fuzzy system. While the changes in the system variables of a switching system may be abrupt, such changes are typically smooth in a fuzzy system. Therefore obtaining such an equivalent fuzzy system may preserve the stability properties of the original switching system while smoothing the system dynamics. Motivated by this idea, we propose various switching strategies for certain classes of nonlinear systems and provide some stability results. Due to the difficulties in designing such switching rules for nonlinear systems, most of the results are developed for certain specific types of systems. Due to their logical structure, obtaining rigorous stability results is very difficult for fuzzy systems. We propose a fuzzy controller design method and prove a stability result under certain conditions. The proposed method may also be applied to function approximation. We also consider a different stabilization method, namely phase portrait matching, in which the main aim is to choose the control input appropriately so that the dynamics of the closed-loop system are close to a given desired dynamics. If this is achieved, then the phase portrait of the closed-loop system will also be close to the desired phase portrait. We propose various schemes to achieve this task.

Item Open Access Analysis of current induction on thin conductors inside the body during MRI scan (Bilkent University, 2014) Açıkel, Volkan; Atalar, Ergin
The aim of this thesis is to develop a method, based on the Modified Transmission Line Method (MoTLiM), to analyze currents on thin conductor structures inside the body during a Magnetic Resonance Imaging (MRI) scan. In this thesis, first, Active Implantable Medical Devices (AIMDs) are modeled and the tissue heating problem, which results from coupling between the AIMD and the incident Radio Frequency (RF) fields, is examined. Then, the use of MoTLiM to analyze the currents on guidewires is demonstrated by solving the currents on a guidewire when a toroidal transmit/receive coil is used with the guidewire.
First, a method to measure the MoTLiM parameters of leads using a network analyzer is shown. Then, the IPG case and electrode are modeled with a voltage source and impedance. The values of these parameters are found using the Modified Transmission Line Method (MoTLiM) and Method of Moments (MoM) simulations. Once the parameter values of an electrode/IPG case model are determined, they can be connected to any lead, and tip heating can be analyzed. To validate these models, both MoM simulations and MR experiments are used. The induced currents on the leads with the IPG case or electrode connections are solved using the proposed models and MoTLiM. These results are compared with MoM simulations. In addition, an electrode is connected to a lead via an inductor. The dissipated power on the electrode is calculated using MoTLiM while changing the inductance, and the results are compared with the specific absorption rate results obtained using MoM. Then, MRI experiments are conducted to test the IPG case and electrode models. To test the IPG case, a bare lead is connected to the case and placed inside a uniform phantom. During an MRI scan, the temperature rise at the lead is measured while changing the lead length. The power at the lead tip for the same scenario is also calculated using the IPG case model and MoTLiM. Then an electrode is connected to a lead via an inductor and placed inside a uniform phantom. During an MRI scan, the temperature rise at the electrode is measured while changing the inductance and compared with the dissipated power on the electrode resistance.

Second, based on the similarity between the currents on guidewires and those on transmission lines, the currents on the catheter are solved with MoTLiM. The current distributions on an insulated guidewire are solved and the B1 distribution along the catheter is calculated. The effect of stripping the tip on tip visibility is analyzed. It is shown that there is an increase in B1 at the boundary between the insulation and the bare guidewire. Then, a characteristic impedance is defined for the guidewires, and the impedance seen at the point where the guidewire is inserted into the body is calculated. It is shown with EM simulations that if this impedance converges to the characteristic impedance of the guidewire, the tip visibility of the guidewire is lost.

Finally, a new method to measure the electrical properties of a phantom material is proposed. This method is used for validation of the coaxial transmission line measurement (CTLM) fixture, which was designed for the measurement of the electrical properties of viscous phantom materials at MRI frequencies and was previously presented by our group. The new method relies on the phenomenon of lead tip heating inside a phantom during an MRI scan. The electrical properties of a phantom influence the relationship between the tip temperature increase and the lead length. MoTLiM is used to formulate the relationship between the lead length and the tip temperature increase as a function of the conductivity and permittivity of the phantom. By changing the lead length, the tip temperature increase is measured, and the MoTLiM formulation is fitted to these data to find the electrical properties of the phantom. Afterwards, the electrical properties of the phantom are measured with the CTLM fixture, and the results obtained with both methods are compared for an error analysis. To sum up, electrical models for the IPG case and electrode are suggested, and a method is proposed to determine the parameter values.
The effect of the IPG case and electrode on tip heating can be predicted using the proposed theory. An analytical analysis of a guidewire with a toroidal transceiver is also shown. This analysis is helpful for better use and further improvement of the toroidal transceiver. Moreover, the MoTLiM analysis can be extended to other MRI guidewire antennas.

Item Open Access Analysis of cylindrical reflector antennas in the presence of circular radomes by complex source-dual series approach (Bilkent University, 1996) Oğuzer, Taner; Altıntaş, Ayhan
The radiation from cylindrical reflector antennas is analyzed in an accurate manner for both the H and E polarization cases. The problem is first formulated in terms of dual series equations and is then regularized by the Riemann-Hilbert Problem technique. The resulting matrix equation is solved numerically with guaranteed accuracy, and remarkably little CPU time is needed. The feed directivity is included in the analysis by the complex source point method. Various characteristic patterns are obtained for front-fed and offset-fed reflector antenna geometries with this analysis, and some comparisons are made with high-frequency techniques. The directivity and radiated power properties are also studied. Furthermore, the results are compared with Method of Moments and Physical Optics solutions. Then the case of a circular radome enclosing the reflector is considered. Radomes concentric with the reflector are examined first, followed by non-concentric radomes.

Item Open Access Analysis of Gaussian-beam pumped optical parametric amplifiers for the generation of squeezed states of light (Bilkent University, 2002) Köprülü, Kahraman Güçlü; Aytür, Orhan

Item Open Access An analytical model of IEEE 802.11 DCF for multi-hop wireless networks and its application to goodput and energy analysis (Bilkent University, 2010) Aydoğdu, Canan; Karaşan, Ezhan
In this thesis, we present an analytical model for the IEEE 802.11 DCF in multi-hop networks that considers hidden terminals and works for a large range of traffic loads. A goodput model, which considers rate reduction due to collisions, retransmissions and hidden terminals, and an energy model, which considers energy consumption due to collisions, retransmissions, the exponential backoff and freezing mechanisms, and overhearing of nodes, are proposed and used to analyze the goodput and energy performance of various routing strategies in IEEE 802.11 DCF-based multi-hop wireless networks. Moreover, an adaptive routing algorithm which determines the optimum routing strategy adaptively according to the network and traffic conditions is suggested. Viewed from the goodput aspect, the results are as follows: under light traffic, the arrival rate of packets is dominant, making any routing strategy equivalently optimum. Under moderate traffic, concurrent transmissions dominate and multi-hop transmissions become more advantageous. Under heavy traffic, multi-hopping becomes unstable due to increased packet collisions and excessive traffic congestion, and direct transmission increases goodput. From a throughput aspect, it is shown that throughput is topology-dependent rather than traffic-load-dependent, and multi-hopping is optimum for large networks, whereas direct transmissions may increase the throughput for small networks.
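Analytical DCF models of this kind are usually built around a fixed-point characterization of the per-station transmission probability. The sketch below solves the classical single-hop saturation fixed point of Bianchi's model (transmission probability tau versus conditional collision probability p); it is included only as a familiar reference point and is much simpler than the thesis's model, which additionally covers multi-hop topologies, hidden terminals, and general traffic loads.

```python
# Minimal sketch: Bianchi-style saturation fixed point for IEEE 802.11 DCF (single hop).
# tau: probability a station transmits in a slot; p: conditional collision probability.
import numpy as np

def solve_dcf_fixed_point(n, W=32, m=5, iters=2000):
    """n contending stations, minimum contention window W, m backoff stages."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau_new = 2.0 * (1.0 - 2.0 * p) / ((1.0 - 2.0 * p) * (W + 1)
                                           + p * W * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * tau + 0.5 * tau_new          # damped iteration for robust convergence
    return tau, p

for n in (5, 10, 20, 50):
    tau, p = solve_dcf_fixed_point(n)
    p_idle = (1.0 - tau) ** n                    # probability of an idle slot
    p_succ = n * tau * (1.0 - tau) ** (n - 1)    # probability of a successful slot
    print(f"n={n:2d}  tau={tau:.4f}  p_coll={p:.4f}  P_idle={p_idle:.3f}  P_succ={p_succ:.3f}")
```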
Viewed from the energy aspect, similar results are obtained: under light traffic, the energy spent in idle mode dominates the energy model, making any routing strategy nearly optimum. Under moderate traffic, the energy spent in idle and receive modes dominates and multi-hop transmissions become more advantageous, with the optimum hop number varying with the processing power consumed at intermediate nodes. Under very heavy traffic conditions, multi-hopping becomes unstable due to increased collisions and direct transmission becomes more energy-efficient. The choice of hop count in the routing strategy is observed to affect energy efficiency and goodput more for large and homogeneous networks, where it is possible to use shorter hops each covering similar distances. The results indicate that a cross-layer routing approach, which takes the energy expenditure due to MAC contentions into account and dynamically changes the routing strategy according to the network traffic load, can increase goodput by at least 18% and save energy by at least 21% in a realistic wireless network where the network traffic load changes in time. The goodput gain increases up to 222% and the energy saving up to 68% for denser networks where multi-hopping with much shorter hops becomes possible.

Item Open Access Anomaly detection in diverse sensor networks using machine learning (Bilkent University, 2022-01) Akyol, Ali Alp; Arıkan, Orhan
Earthquake precursor detection is one of the oldest research areas with the potential of saving human lives. Recent studies have shown that strong seismic activities and earthquakes affect the electron distribution of the ionosphere. These effects are clearly observable in the ionospheric Total Electron Content (TEC), which can be measured by using the satellite position data of the Global Navigation Satellite System (GNSS). In this dissertation, several earthquake precursor detection techniques are proposed and their precursor detection performances are investigated on TEC data obtained from different sensor networks. First, a model-based earthquake precursor detection technique is proposed to detect precursors of earthquakes with magnitudes greater than 5 in the vicinity of Turkey. Precursor detection and TEC reliability signals are generated by using ionospheric TEC variations. These signals are thresholded to obtain earthquake precursor decisions. Earthquake precursor detections are made by applying the Particle Swarm Optimization (PSO) technique to these precursor decisions. Performance evaluations show that the proposed technique is able to detect 14 out of 23 precursors of earthquakes with magnitude larger than 5 on the Richter scale while generating 8 false precursor decisions. Second, a machine learning based earthquake precursor detection technique, EQ-PD, is proposed to detect precursors of earthquakes with magnitudes greater than 4 in the vicinity of Italy. Spatial and spatio-temporal anomaly detection thresholds are obtained by using the statistics of TEC variation during seismically active times and are applied to a TEC-variation-based anomaly detection signal to form precursor decisions. The resulting spatial and spatio-temporal anomaly decisions are fed to a Support Vector Machine (SVM) classifier to generate earthquake precursor detections. When the precursor detection performance of EQ-PD is investigated, it is observed that the technique is able to detect 22 out of 24 earthquake precursors while generating 13 false precursor decisions during 147 days of no seismic activity.
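The anomaly-decision stage used in such techniques can be pictured as thresholding a variation signal against statistics gathered from quiet periods. The sketch below is a deliberately simplified, hypothetical stand-in for that stage: a sliding-window z-score detector applied to a synthetic TEC-like series, with the threshold taken from a presumed-quiet reference segment. It is not the EQ-PD or DL-PD pipeline, which additionally uses spatial information, reliability signals, and an SVM or deep-network decision stage.

```python
# Minimal sketch: threshold-based anomaly decisions on a synthetic TEC-like time series.
import numpy as np

rng = np.random.default_rng(3)
n = 24 * 60                                      # one day of minute-resolution samples (synthetic)
t = np.arange(n)
tec = 20 + 8 * np.sin(2 * np.pi * t / n) + rng.normal(0, 0.5, n)   # diurnal trend + noise
tec[1000:1040] += 4.0                            # injected anomalous enhancement

def sliding_zscore(x, window=120):
    """Deviation of each sample from the mean/std of the preceding window."""
    z = np.zeros_like(x)
    for i in range(window, len(x)):
        ref = x[i - window:i]
        z[i] = (x[i] - ref.mean()) / (ref.std() + 1e-9)
    return z

z = sliding_zscore(tec)
threshold = np.quantile(np.abs(z[:900]), 0.99)   # calibrated on a presumed-quiet segment
decisions = np.abs(z) > threshold                # binary anomaly decisions per sample
print("anomalous samples flagged:", np.flatnonzero(decisions)[:10], "...")
```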
Last, a deep learning based earthquake precursor detection technique, DL-PD, is proposed to detect precursors of earthquakes with magnitudes greater than 5.4 in the vicinity of the Anatolia region. The DL-PD technique utilizes a deep neural network with spatio-temporal Global Ionospheric Map (GIM)-TEC data estimation capabilities. A GIM-TEC anomaly score is obtained by comparing GIM-TEC estimates with GIM-TEC recordings. Earthquake precursor detections are generated by thresholding the GIM-TEC anomaly scores. Precursor detection performance evaluations show that DL-PD can detect 5 out of 7 earthquake precursors while generating 1 false precursor decision during 416 days of no seismic activity.

Item Open Access Antenna analysis (Bilkent University, 2009) Tunç, Celal Alp; Altıntaş, Ayhan
Multiple-input-multiple-output (MIMO) wireless communication systems have been attracting huge interest, since a boost in the data rate was shown to be possible by using multiple antennas at both the transmitter and the receiver. It is obvious that the electromagnetic effects of the multiple antennas have to be included in the wireless channel for an accurate system design, though they were often neglected by early studies. In this thesis, the MIMO channel is investigated from an electromagnetics point of view. A full-wave channel model based on the method of moments solution of the electric field integral equation is developed and used in order to evaluate the MIMO channel matrix accurately. The model is called the channel model with electric fields (MEF); it calculates the exact fields via the radiation integrals and is therefore rigorous except for the random scatterer environment. The accuracy of the model is further verified by measurement results. Thus, it is concluded that MEF achieves better accuracy than other approaches, which are incapable of analyzing antenna effects in detail. Making use of the presented technique, the MIMO performance of printed dipole arrays is analyzed. The effects of the electrical properties of printed dipoles on the MIMO capacity are explored in terms of the relative permittivity and thickness of the dielectric material. Appropriate dielectric slab configurations yielding high-capacity printed dipole arrays are presented. The numerical efficiency of the technique (particularly for freestanding and printed dipoles) allows analyzing the MIMO performance of arrays with large numbers of antennas, as well as high-performance array design in conjunction with well-known optimization tools. Thus, MEF is combined with particle swarm optimization (PSO) to design MIMO arrays of dipole elements for superior capacity. Freestanding and printed dipole arrays are analyzed and optimized, and the adaptive performance of printed dipole arrays in the MIMO channel is investigated. Furthermore, capacity-achieving input covariance matrices for different types of arrays are obtained numerically using PSO in conjunction with MEF. It is observed that moderate capacity improvement is possible for small antenna spacing values, where the correlation is relatively high, mainly by utilizing nearly full or full covariance matrices. Otherwise, the selection of the diagonal covariance is almost the optimal solution. The MIMO performance of printed rectangular patch arrays is analyzed using a modified version of MEF. Various array configurations are designed and manufactured, and their MIMO performance is measured in an indoor environment. The channel properties, such as the power delay profile, mean excess delay and delay spread, are obtained via measurements and compared with MEF results. Very good agreement is achieved.
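The capacity figure of merit referred to throughout this abstract follows directly from the channel matrix. The snippet below evaluates the standard MIMO capacity expression, C = log2 det(I + (rho/Nt) H Hᴴ), for equal power allocation, averaged over random channel realizations; it uses an i.i.d. Rayleigh channel purely as a placeholder, whereas in the thesis the channel matrix H would come from the full-wave MEF model.

```python
# Minimal sketch: ergodic MIMO capacity with equal power allocation.
# C = log2 det(I + (snr/Nt) * H * H^H); H here is i.i.d. Rayleigh (stand-in for the MEF channel).
import numpy as np

def mimo_capacity(H, snr_linear):
    nr, nt = H.shape
    A = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)   # Hermitian, positive definite
    return np.real(np.log2(np.linalg.det(A)))               # bits/s/Hz

rng = np.random.default_rng(7)
nt, nr, snr_db, trials = 4, 4, 10.0, 2000
snr = 10.0 ** (snr_db / 10.0)

caps = []
for _ in range(trials):
    H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2.0)
    caps.append(mimo_capacity(H, snr))
print(f"ergodic capacity at {snr_db:.0f} dB SNR: {np.mean(caps):.2f} bits/s/Hz")
```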
Item Open Access Applications of electromagnetic phenomena in periodic structures (Bilkent University, 2012) Çakmak, Atilla Özgür

Item Embargo Artificial intelligence-based hybrid anomaly detection and clinical decision support techniques for automated detection of cardiovascular diseases and Covid-19 (Bilkent University, 2023-10) Terzi, Merve Begüm; Arıkan, Orhan
Coronary artery diseases are the leading cause of death worldwide, and early diagnosis is crucial for timely treatment. To address this, we present a novel automated artificial intelligence-based hybrid anomaly detection technique composed of various signal processing, feature extraction, supervised, and unsupervised machine learning methods. By jointly and simultaneously analyzing 12-lead electrocardiogram (ECG) and cardiac sympathetic nerve activity (CSNA) data, the automated artificial intelligence-based hybrid anomaly detection technique performs fast, early, and accurate diagnosis of coronary artery diseases. To develop and evaluate the proposed automated artificial intelligence-based hybrid anomaly detection technique, we utilized the fully labeled STAFF III and PTBD databases, which contain 12-lead wideband raw recordings non-invasively acquired from 260 subjects. Using the wideband raw recordings in these databases, we developed a signal processing technique that simultaneously detects the 12-lead ECG and CSNA signals of all subjects. Subsequently, using the pre-processed 12-lead ECG and CSNA signals, we developed a time-domain feature extraction technique that extracts the statistical CSNA and ECG features critical for the reliable diagnosis of coronary artery diseases. Using the extracted discriminative features, we developed a supervised classification technique based on artificial neural networks that simultaneously detects anomalies in the 12-lead ECG and CSNA data. Furthermore, we developed an unsupervised clustering technique based on the Gaussian mixture model and Neyman-Pearson criterion that performs robust detection of the outliers corresponding to coronary artery diseases. By using the automated artificial intelligence-based hybrid anomaly detection technique, we have demonstrated a significant association between the increase in the amplitude of CSNA signal and anomalies in ECG signal during coronary artery diseases. The automated artificial intelligence-based hybrid anomaly detection technique performed highly reliable detection of coronary artery diseases with a sensitivity of 98.48%, specificity of 97.73%, accuracy of 98.11%, positive predictive value (PPV) of 97.74%, negative predictive value (NPV) of 98.47%, and F1-score of 98.11%. Hence, the artificial intelligence-based hybrid anomaly detection technique has superior performance compared to the gold standard diagnostic test ECG in diagnosing coronary artery diseases. Additionally, it outperformed other techniques developed in this study that separately utilize either only CSNA data or only ECG data. Therefore, it significantly increases the detection performance of coronary artery diseases by taking advantage of the diversity in different data types and leveraging their strengths. Furthermore, its performance is comparatively better than that of most previously proposed machine and deep learning methods that exclusively used ECG data to diagnose or classify coronary artery diseases. It also has a very short implementation time, which is highly desirable for real-time detection of coronary artery diseases in clinical practice.
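The unsupervised stage described above, a Gaussian mixture model combined with a Neyman-Pearson-style decision rule, can be outlined in a few lines with standard tooling. The sketch below fits a mixture to hypothetical "healthy" feature vectors and sets the log-likelihood threshold so that a chosen false-alarm rate is met on held-out healthy data; the data, feature dimensions, and rates are all placeholders, and the thesis's actual features come from ECG/CSNA processing.

```python
# Minimal sketch: GMM-based outlier detection with a Neyman-Pearson-style threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
healthy = rng.normal(0.0, 1.0, size=(2000, 6))            # placeholder "healthy" feature vectors
diseased = rng.normal(1.5, 1.5, size=(200, 6))             # placeholder "outlier" feature vectors

train, holdout = healthy[:1500], healthy[1500:]
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(train)

# Neyman-Pearson-style calibration: pick the threshold that yields the target false-alarm rate
# on held-out healthy data, then flag test samples whose log-likelihood falls below it.
target_pfa = 0.05
threshold = np.quantile(gmm.score_samples(holdout), target_pfa)

pfa = np.mean(gmm.score_samples(holdout) < threshold)       # ~ target_pfa by construction
pd = np.mean(gmm.score_samples(diseased) < threshold)       # detection rate on outliers
print(f"false-alarm rate ~ {pfa:.3f}, detection rate ~ {pd:.3f}")
```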
The proposed automated artificial intelligence-based hybrid anomaly detection technique may serve as an efficient decision-support system to increase physicians' success in achieving fast, early, and accurate diagnosis of coronary artery diseases. It may be highly beneficial and valuable, particularly for asymptomatic coronary artery disease patients, for whom the diagnostic information provided by ECG alone is not sufficient to reliably diagnose the disease. Hence, it may significantly improve patient outcomes, enable timely treatments, and reduce the mortality associated with cardiovascular diseases. Secondly, we propose a new automated artificial intelligence-based hybrid clinical decision support technique that jointly analyzes reverse transcriptase polymerase chain reaction (RT-PCR) curves, thorax computed tomography images, and laboratory data to perform fast and accurate diagnosis of Coronavirus disease 2019 (COVID-19). For this purpose, we retrospectively created the fully labeled Ankara University Faculty of Medicine COVID-19 (AUFM-CoV) database, which contains a wide variety of medical data, including RT-PCR curves, thorax computed tomography images, and laboratory data. The AUFM-CoV is the most comprehensive database that includes thorax computed tomography images of COVID-19 pneumonia (CVP), other viral and bacterial pneumonias (VBP), and parenchymal lung diseases (PLD), all of which present significant challenges for differential diagnosis. We developed a new automated artificial intelligence-based hybrid clinical decision support technique, which is an ensemble learning technique consisting of two preprocessing methods, a long short-term memory network-based deep learning method, a convolutional neural network-based deep learning method, and an artificial neural network-based machine learning method. By jointly analyzing RT-PCR curves, thorax computed tomography images, and laboratory data, the proposed automated artificial intelligence-based hybrid clinical decision support technique benefits from the diversity in different data types that are critical for the reliable detection of COVID-19 and leverages their strengths. The multi-class classification performance results of the proposed convolutional neural network-based deep learning method on the AUFM-CoV database showed that it achieved highly reliable detection of COVID-19 with a sensitivity of 91.9%, specificity of 92.5%, precision of 80.4%, and F1-score of 86%. Therefore, it outperformed thorax computed tomography in terms of the specificity of COVID-19 diagnosis. Moreover, the convolutional neural network-based deep learning method has been shown to very successfully distinguish COVID-19 pneumonia (CVP) from other viral and bacterial pneumonias (VBP) and parenchymal lung diseases (PLD), which exhibit very similar radiological findings. Therefore, it has great potential to be successfully used in the differential diagnosis of pulmonary diseases containing ground-glass opacities. The binary classification performance results of the proposed convolutional neural network-based deep learning method showed that it achieved a sensitivity of 91.5%, specificity of 94.8%, precision of 85.6%, and F1-score of 88.4% in diagnosing COVID-19. Hence, it has comparable sensitivity to thorax computed tomography in diagnosing COVID-19.
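The hybrid technique above is an ensemble that fuses decisions from modality-specific models (LSTM for RT-PCR curves, CNN for CT images, ANN for laboratory data). As a generic, hypothetical illustration of that late-fusion idea only, the sketch below averages the predicted class probabilities of three simple per-modality classifiers trained on synthetic features; it makes no claim about the actual architectures, weights, or fusion rule used in the thesis.

```python
# Minimal sketch: late fusion of modality-specific classifiers by probability averaging.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 600
y = rng.integers(0, 2, n)                                   # 0: non-COVID, 1: COVID (synthetic labels)

# Three synthetic "modalities" with different feature dimensions and informativeness.
modalities = {
    "rt_pcr": y[:, None] * 1.0 + rng.normal(0, 1.2, (n, 4)),
    "ct":     y[:, None] * 1.5 + rng.normal(0, 1.0, (n, 8)),
    "lab":    y[:, None] * 0.7 + rng.normal(0, 1.5, (n, 5)),
}

train, test = slice(0, 450), slice(450, n)
probs = []
for name, X in modalities.items():
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    probs.append(clf.predict_proba(X[test])[:, 1])          # per-modality probability of class 1

fused = np.mean(probs, axis=0)                              # simple soft-voting fusion
pred = (fused > 0.5).astype(int)
print("fused accuracy:", np.mean(pred == y[test]))
```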
Additionally, the binary classification performance results of the proposed long short-term memory network-based deep learning method on the AUFM-CoV database showed that it performed highly reliable detection of COVID-19 with a sensitivity of 96.6%, specificity of 99.2%, precision of 98.1%, and F1-score of 97.3%. Thus, it outperformed the gold standard RT-PCR test in terms of the sensitivity of COVID-19 diagnosis. Furthermore, the multi-class classification performance results of the proposed automated artificial intelligence-based hybrid clinical decision support technique on the AUFM-CoV database showed that it diagnosed COVID-19 with a sensitivity of 66.3%, specificity of 94.9%, precision of 80%, and F1-score of 73%. Hence, it has been shown to very successfully perform the differential diagnosis of COVID-19 pneumonia (CVP) and other pneumonias. The binary classification performance results of the automated artificial intelligence-based hybrid clinical decision support technique revealed that it diagnosed COVID-19 with a sensitivity of 90%, specificity of 92.8%, precision of 91.8%, and F1-score of 90.9%. Therefore, it exhibits superior sensitivity and specificity compared to laboratory data in COVID-19 diagnosis. The performance results of the proposed automated artificial intelligence-based hybrid clinical decision support technique on the AUFM-CoV database demonstrate its ability to provide highly reliable diagnosis of COVID-19 by jointly analyzing RT-PCR data, thorax computed tomography images, and laboratory data. Consequently, it may significantly increase the success of physicians in diagnosing COVID-19, assist them in rapidly isolating and treating COVID-19 patients, and reduce their workload in daily clinical practice.

Item Open Access Broadband GaN LNA MMIC development with the micro/nano process development by kink-effect in S22 consideration (Bilkent University, 2021-01) Osmanoğlu, Sinan; Özbay, Ekmel
Broadband low noise amplifiers (LNAs) are among the key components of numerous applications such as communication, electronic warfare, and radar. The requirements for higher bandwidth, higher speed, higher survivability, higher reliability, etc. push the technological boundaries. The demand for high-performance circuit components without compromise motivates the use of high-end gallium nitride (GaN) technology to develop better monolithic microwave integrated circuits (MMICs) in a smaller footprint. To support this progress, the development of a proper GaN high electron mobility transistor (HEMT) technology and proper circuit models has become critical. To contribute to these efforts, a 0.25 µm microstrip (MS) GaN HEMT technology is developed at the Bilkent University Nanotechnology Research Center (NANOTAM). The technology development shows that the MS GaN HEMT technology is capable of supporting ≥4.4 W/mm output power (POUT), ≥50% power added efficiency (PAE), ≥15 dB gain, and ∼1 dB noise figure (NF) at 10 GHz. Moreover, the gate structure of the technology is studied by evaluating the kink effect (KE) in the output reflection coefficient (S22) of a HEMT to support broadband operation. Besides the technology development, the small-signal (SS) and noise equivalent circuit models are studied, and the developed models show close agreement with the measurements. The accuracy of the models contributes to the development of cascode-HEMT-based LNAs even without fabricating the cascode HEMT.
Furthermore, the developed models and the proper gate structure are used to develop a broadband quad-flat no-leads (QFN) packaged GaN LNA MMIC for mobile radio communication, military radar, and commercial radar applications. The results of the circuit models and the GaN LNA MMIC also show that the developed MS GaN HEMT technology is capable of supporting different solutions up to 18 GHz.

Item Open Access Calculation of scalar optical diffraction field from its distributed samples over the space (Bilkent University, 2010) Esmer, Gökhan Bora; Onural, Levent
As a three-dimensional viewing technique, holography provides successful three-dimensional perceptions. The technique is based on duplication of the information-carrying optical waves that come from an object. Therefore, calculation of the diffraction field due to the object is an important process in digital holography. To obtain an exact reconstruction of the object, the exact diffraction field created by the object has to be calculated. In the literature, one of the commonly used approaches for calculating the diffraction field due to an object is to superpose the fields created by the elementary building blocks of the object; such procedures may be called the “source model” approach, and such a computed field can be different from the exact field over the entire space. In this work, we propose four algorithms to calculate the exact diffraction field due to an object. These proposed algorithms may be called the “field model” approach. In the first algorithm, the diffraction field given over the manifold that defines the surface of the object is decomposed onto a function set derived from propagating plane waves. The second algorithm is based on pseudo-inversion of the system matrix which gives the relation between the given field samples and the field over a transversal plane. The third and fourth algorithms are iterative methods. In the third algorithm, the diffraction field is calculated by the method of projections onto convex sets. In the fourth algorithm, the pseudo-inverse of the system matrix is computed by the conjugate gradient method. Depending on the number and the locations of the given samples, the proposed algorithms provide the exact field solution over the entire space. To compute the exact field, the number of given samples has to be larger than the number of plane waves that form the diffraction field over the entire space. The solution is affected by the dependencies between the given samples. To decrease these dependencies, the samples over the manifold may be taken randomly. The iterative algorithms outperform the others in terms of computational complexity when the number of given samples is larger than 1.4 times the number of plane waves forming the diffraction field over the entire space.
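The "field model" idea above reduces to a linear inverse problem: the given field samples are linear combinations of a finite set of propagating plane waves, and the plane-wave coefficients are recovered by (pseudo-)inverting that system. The sketch below sets up a one-dimensional toy version and solves it with a direct least-squares call standing in for the pseudo-inversion; the thesis's iterative variants (projections onto convex sets, conjugate gradient) address the same kind of system at larger scale. All sizes and wavelengths are illustrative.

```python
# Minimal sketch: recover plane-wave coefficients of a field from scattered samples.
import numpy as np

rng = np.random.default_rng(2)
wavelength = 0.5e-6
K = 64                                            # number of propagating plane waves in the model
M = int(1.5 * K)                                  # number of given field samples (> 1.4 * K)

kx = np.linspace(-0.8, 0.8, K) * 2 * np.pi / wavelength      # transverse wavenumbers (propagating)
c_true = rng.normal(size=K) + 1j * rng.normal(size=K)         # unknown plane-wave coefficients

x = rng.uniform(0, 50e-6, M)                      # randomly located sample positions
A = np.exp(1j * np.outer(x, kx))                  # system matrix: samples of each plane wave
f_samples = A @ c_true                            # the "given" field samples

c_est = np.linalg.lstsq(A, f_samples, rcond=None)[0]          # pseudo-inverse solution
print("max coefficient error:", np.max(np.abs(c_est - c_true)))

# With the coefficients known, the field can be computed anywhere in space:
x_new = np.linspace(0, 50e-6, 5)
f_new = np.exp(1j * np.outer(x_new, kx)) @ c_est
```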
Item Open Access Cancer imaging and treatment monitoring with color magnetic particle imaging (Bilkent University, 2021-09) Ütkür, Mustafa; Çukur, Ülkü Sarıtaş
Magnetic particle imaging (MPI) is emerging as a highly promising non-invasive tomographic imaging modality for cancer research. Superparamagnetic iron oxide nanoparticles (SPIONs) are used as imaging tracers in MPI. By exploiting the relaxation behavior of SPIONs, the capabilities of MPI can also be broadened to functional imaging applications that can distinguish different nanoparticles and/or environments. One of the important applications of functional MPI is viscosity mapping, since certain cancer types are shown to have increased cellular viscosity levels. MPI can potentially detect these cancerous tissues by estimating the viscosity levels of the tissue environment. Another important application area of MPI is temperature mapping, since SPIONs are also utilized in magnetic fluid hyperthermia (MFH) treatments and MPI enables localized application of MFH. To achieve accurate temperature estimations, however, one must also take into account the confounding effects of viscosity and temperature on the MPI signal. This dissertation studies relaxation-based viscosity and temperature mapping with MPI, covering the biologically relevant viscosity range (<5 mPa·s) and the therapeutically applicable temperature range (25-45°C). The characterization of the SPION relaxation response was performed on an in-house arbitrary-waveform magnetic particle spectrometer (MPS) setup, and the imaging experiments were performed on an in-house MPI scanner. Both the MPS setup and the MPI scanner were designed and developed as parts of this thesis. The effects of viscosity and temperature on relaxation time constant estimations were investigated, and the sensitivities of MPI to these functional parameters were determined at a wide range of operating points. The relaxation time constants, τ, were estimated with a technique called TAURUS (TAU, τ, estimation via Recovery of Underlying mirror Symmetry), which is based on a linear relaxation equation. Although the nonlinear relaxation behaviors of the SPIONs are highly dependent on the excitation field parameters, the SPION type, and the hardware configuration, the results suggest that a one-to-one relation between the estimated τ and the targeted functional parameters (i.e., viscosity or temperature) can be obtained. According to these results, MPI can successfully map viscosity and temperature, with higher than 30% per mPa·s sensitivity for viscosity mapping and approximately 10%/°C sensitivity for temperature mapping, at 10 kHz drive field frequency. In addition, the results suggest that simultaneous mapping of viscosity and temperature can be achieved by performing multiple measurements at different drive field frequencies and/or amplitudes. Overall, these findings show that hybrid MPI-MFH systems offer a promising approach for real-time monitored and localized thermal ablation treatment of cancer. The viscosity and temperature mapping capabilities of MPI via relaxation time constant estimation can provide feedback for high-accuracy thermal dose adjustment to the cancerous tissues, thereby increasing the efficacy of the treatment.
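As a generic illustration of relaxation time constant estimation (not the TAURUS mirror-symmetry procedure itself), the sketch below blurs an ideal Langevin-type MPI signal with the Debye relaxation kernel (1/τ)·exp(−t/τ) that is commonly used to model SPION relaxation, and then recovers τ by a one-dimensional least-squares search; the drive parameters and the nanoparticle model are purely illustrative.

```python
# Minimal sketch: estimate a relaxation time constant tau from a relaxation-blurred signal.
# Model: measured signal = ideal signal convolved with (1/tau)*exp(-t/tau), t >= 0 (Debye kernel).
import numpy as np

fs, f_drive = 2e6, 10e3                      # sampling rate and drive field frequency (illustrative)
t = np.arange(0, 2e-3, 1 / fs)               # 2 ms of data
H = np.cos(2 * np.pi * f_drive * t)          # normalized drive field waveform

# Ideal adiabatic signal: derivative of a Langevin-type magnetization response (toy model).
M_ideal = np.tanh(4.0 * H)
s_ideal = np.gradient(M_ideal, 1 / fs)

def relax(signal, tau):
    kernel_t = np.arange(0, 10 * tau, 1 / fs)
    kernel = np.exp(-kernel_t / tau)
    kernel /= kernel.sum()                   # discrete normalization of (1/tau)*exp(-t/tau)
    return np.convolve(signal, kernel)[: len(signal)]

tau_true = 3e-6                              # 3 microseconds
noise = np.random.default_rng(4).normal(0, 1e-3 * np.max(np.abs(s_ideal)), len(t))
s_meas = relax(s_ideal, tau_true) + noise

# One-dimensional least-squares search over candidate time constants.
candidates = np.linspace(0.5e-6, 10e-6, 96)
errors = [np.sum((relax(s_ideal, tau) - s_meas) ** 2) for tau in candidates]
print("estimated tau [us]:", candidates[int(np.argmin(errors))] * 1e6)
```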