Browsing by Subject "Algorithm"
Now showing 1 - 20 of 34
Item Open Access
Accelerated phase-cycled SSFP imaging with compressed sensing (Institute of Electrical and Electronics Engineers Inc., 2015) Çukur, T.
Balanced steady-state free precession (SSFP) imaging suffers from irrecoverable signal losses, known as banding artifacts, in regions of large B0 field inhomogeneity. A common solution is to acquire multiple phase-cycled images, each with a different frequency sensitivity, such that the locations of banding artifacts are shifted in space. These images are then combined to alleviate signal loss across the entire field-of-view. Although high levels of artifact suppression are viable using a large number of images, this is a time-costly process that limits clinical utility. Here, we propose to accelerate the individual acquisitions such that the overall scan time equals that of a single SSFP acquisition. Aliasing artifacts and noise are minimized by using a variable-density random sampling pattern in k-space and by generating disjoint sampling patterns for the separate acquisitions. A sparsity-enforcing method is then used for image reconstruction. Demonstrations on realistic brain phantom images and on in vivo brain and knee images are provided. In all cases, the proposed technique enables robust SSFP imaging in the presence of field inhomogeneities without prolonging scan times. © 2014 IEEE.

Item Open Access
Activity management algorithm for improving energy efficiency of small cell base stations in 5G heterogeneous networks (2014) Aykın, Irmak
Heterogeneous networks (HetNets) are proposed in order to meet the increasing demand for next-generation cellular wireless networks, but they also increase the energy consumption of the base stations. In this thesis, an activity management algorithm for improving the energy efficiency of HetNets is proposed. A smart sleep strategy is employed for the operator-deployed pico base stations to enter sleep and active modes.
According to this strategy, when the number of users exceeds the turn-on threshold, the pico node becomes active, and when the number of users drops below the turn-off threshold, it goes into sleep mode. Mobile users dynamically enter and leave the cells, triggering the activation and deactivation of pico base stations. The performance of the system is examined for three different cellular network architectures: cell on edge (COE), uniformly distributed cells (UDC), and macro-cell-only network (MoNet). Two different user distributions are considered: uniform and hotspot. The effects of the number of hotspot users and of the sleep energies of pico nodes on energy efficiency are also investigated. The proposed activity management algorithm increases the energy efficiency, measured in bits/J, by 20%. The average bit rates achieved by HetNet users increase by 29% compared with the MoNet architecture. Thus, the proposed activity control algorithm increases the spectral efficiency of the network while consuming energy more efficiently.

Item Open Access
Algebraic reconstruction for 3D magnetic resonance-electrical impedance tomography (MREIT) using one component of magnetic flux density (Institute of Physics and Engineering in Medicine, 2004) Ider, Y. Z.; Onart, S.
Magnetic resonance-electrical impedance tomography (MREIT) algorithms fall into two categories: those utilizing internal current density and those utilizing only one component of the measured magnetic flux density. The latter group of algorithms has the advantage that the object does not have to be rotated in the magnetic resonance imaging (MRI) system. A new algorithm which uses only one component of the measured magnetic flux density is developed. In this method, the imaging problem is formulated as the solution of a non-linear matrix equation, which is solved iteratively to reconstruct the resistivity. Numerical simulations are performed to test the algorithm for both noise-free and noisy cases.
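The "non-linear matrix equation which is solved iteratively" formulation in the MREIT abstract above can be illustrated generically. The sketch below is hypothetical throughout: the toy system, the damping factor, and the finite-difference Jacobian are illustration choices, not the paper's actual forward model or update rule.

```python
import numpy as np

def newton_solve(F, x0, damping=0.5, tol=1e-10, max_iter=100):
    """Damped Newton iteration for F(x) = 0 with a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        # Build the Jacobian column by column by forward differences.
        J = np.empty((r.size, n))
        h = 1e-7
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (F(x + e) - r) / h
        # A least-squares step tolerates a rank-deficient Jacobian, which is
        # where monitoring singular values (as the abstract does) matters.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + damping * step
    return x

# Toy non-linear system with known root (1, 2).
F = lambda x: np.array([x[0] ** 2 - 1.0, x[0] * x[1] - 2.0])
root = newton_solve(F, [2.0, 3.0])
```

The damping factor trades convergence speed for robustness on poorly conditioned problems; a production solver would also track the singular values of `J` to detect non-uniqueness.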
The uniqueness of the solution is monitored via the singular-value behavior of the matrix, and it is shown that at least two current injection profiles are necessary. The method is also modified to handle region-of-interest reconstructions. In particular, it is shown that if the image of a certain xy-slice is sought, it suffices to measure the z-component of the magnetic flux density up to a certain distance above and below that slice. The method is robust and has good convergence behavior for the simulation phantoms used.

Item Open Access
Analytical regularization based analysis of a spherical reflector symmetrically illuminated by an acoustic beam (IEEE, 2000) Vinogradov, S. S.; Vinogradova, E. D.; Nosich, A. I.; Altintaş, A.
A mathematically accurate and numerically efficient method for the analysis of a spherical reflector, fed by a scalar beam produced by a complex-source-point feed, is presented. Two cases, a soft and a hard reflector surface, are considered. In each case the solution of the full-wave integral equation is reduced to dual series equations and then further to a regularized infinite-matrix equation. The latter procedure is based on the analytical inversion of the static part of the problem. Sample numerical results for 50-λ reflectors demonstrate features that escape a high-frequency asymptotic analysis. (C) 2000 Acoustical Society of America.

Item Open Access
Application of the RIMARC algorithm to a large data set of action potentials and clinical parameters for risk prediction of atrial fibrillation (Springer, 2015) Ravens, U.; Katircioglu-Öztürk, D.; Wettwer, E.; Christ, T.; Dobrev, D.; Voigt, N.; Poulet, C.; Loose, S.; Simon, J.; Stein, A.; Matschke, K.; Knaut, M.; Oto, E.; Oto, A.; Güvenir, H. A.
Ex vivo recorded action potentials (APs) in human right atrial tissue from patients in sinus rhythm (SR) or atrial fibrillation (AF) display a characteristic spike-and-dome or triangular shape, respectively, but the variability within each rhythm group is huge. The aim of our study was to apply the machine-learning algorithm "ranking instances by maximizing the area under the ROC curve" (RIMARC) to a large data set of 480 APs combined with retrospectively collected general clinical parameters, and to test whether the rules learned by the RIMARC algorithm can be used to accurately classify the preoperative rhythm status. APs were included from 221 SR and 158 AF patients. During a learning phase, the RIMARC algorithm established a ranking order of 62 features by their predictive value for SR or AF. The model was then challenged with an additional test set of features from 28 patients in whom rhythm status was blinded. The accuracy of the model's risk prediction for AF was very good (0.93) when all features were used. Without the seven AP features, accuracy still reached 0.71. In conclusion, we have shown that training the machine-learning algorithm RIMARC with an experimental and clinical data set allows a classification in a test data set to be predicted with high accuracy. In a clinical setting, this approach may prove useful for finding hypothesis-generating associations between different parameters.

Item Open Access
Attributed relational graphs for cell nucleus segmentation in fluorescence microscopy images (IEEE, 2013) Arslan, S.; Ersahin, T.; Cetin-Atalay, R.; Gunduz-Demir, C.
More rapid and accurate high-throughput screening in molecular cellular biology research has become possible with the development of automated microscopy imaging, for which cell nucleus segmentation commonly constitutes the core step.
Although several promising methods exist for segmenting the nuclei of monolayer isolated and less-confluent cells, segmenting the nuclei of more-confluent cells, which tend to grow in overlayers, remains an open problem. To address this problem, we propose a new model-based nucleus segmentation algorithm. This algorithm models how a human locates a nucleus: by identifying the nucleus boundaries and piecing them together. In this algorithm, we define four types of primitives to represent nucleus boundaries at different orientations and construct an attributed relational graph on the primitives to represent their spatial relations. We then reduce the nucleus identification problem to finding predefined structural patterns in the constructed graph, and we also use the primitives in region growing to delineate the nucleus borders. Working with fluorescence microscopy images, our experiments demonstrate that the proposed algorithm identifies nuclei better than previous nucleus segmentation algorithms. © 2012 IEEE.

Item Open Access
BRAPH: A graph theory software for the analysis of brain connectivity (Public Library of Science, 2017) Mijalkov, M.; Kakaei, E.; Pereira, J. B.; Westman, E.; Volpe, G.
The brain is a large-scale complex network whose workings rely on the interaction between its various regions. In the past few years, the organization of the human brain network has been studied extensively using concepts from graph theory, where the brain is represented as a set of nodes connected by edges. This representation of the brain as a connectome can be used to assess important measures that reflect its topological architecture. We have developed a freeware MATLAB-based software package (BRAPH: BRain Analysis using graPH theory) for connectivity analysis of brain networks derived from structural magnetic resonance imaging (MRI), functional MRI (fMRI), positron emission tomography (PET), and electroencephalogram (EEG) data.
BRAPH allows building connectivity matrices, calculating global and local network measures, performing non-parametric permutations for group comparisons, assessing the modules in the network, and comparing the results to random networks. In contrast to other toolboxes, it allows performing longitudinal comparisons of the same patients across different points in time. Furthermore, even though a user-friendly interface is provided, the architecture of the program is modular (object-oriented), so that it can be easily expanded and customized. To demonstrate the abilities of BRAPH, we performed structural and functional graph theory analyses in two separate studies. In the first study, using MRI data, we assessed the differences in global and nodal network topology in healthy controls, patients with amnestic mild cognitive impairment, and patients with Alzheimer’s disease. In the second study, using resting-state fMRI data, we compared healthy controls and Parkinson’s patients with mild cognitive impairment. © 2017 Mijalkov et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Item Open Access
A color and shape based algorithm for segmentation of white blood cells in peripheral blood and bone marrow images (John Wiley & Sons, Inc., 2014) Arslan, S.; Ozyurek, E.; Gunduz Demir, C.
Computer-based imaging systems are becoming important tools for the quantitative assessment of peripheral blood and bone marrow samples, helping experts diagnose blood disorders such as acute leukemia. These systems generally begin with a segmentation stage in which white blood cells are separated from the background and other non-salient objects. As the success of such imaging systems mainly depends on the accuracy of this stage, studies attach great importance to developing accurate segmentation algorithms.
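The "global and local network measures" that the BRAPH abstract above refers to can be illustrated in miniature from a binary adjacency matrix. This is a hypothetical sketch of two standard measures (node degree and local clustering coefficient); it does not use BRAPH's actual MATLAB interface.

```python
import numpy as np

def degrees(A):
    """Node degree: number of edges attached to each node."""
    return A.sum(axis=1)

def clustering(A):
    """Local clustering coefficient: the fraction of each node's
    neighbor pairs that are themselves connected."""
    A = np.asarray(A)
    k = A.sum(axis=1)
    # diag(A^3) counts closed 3-walks through each node, i.e. 2x triangles.
    triangles = np.diagonal(np.linalg.matrix_power(A, 3)) / 2.0
    possible = k * (k - 1) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(possible > 0, triangles / possible, 0.0)

# 4-node toy "connectome": a triangle (0, 1, 2) plus a pendant node 3.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])
deg = degrees(A)     # node 0 touches all three others
clu = clustering(A)  # the pendant node has no neighbor pairs, so 0
```

Group comparisons in connectome software then reduce to computing such measures per subject and testing them across groups, for example with the non-parametric permutation tests the abstract mentions.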
Although previous studies give promising results for the segmentation of sparsely distributed normal white blood cells, only a few of them focus on segmenting touching and overlapping cell clusters, which is usually the case when leukemic cells are present. In this article, we present a new algorithm for the segmentation of both normal and leukemic cells in peripheral blood and bone marrow images. In this algorithm, we propose to model the color and shape characteristics of white blood cells by defining two transformations, and we introduce an efficient use of these transformations in a marker-controlled watershed algorithm. In particular, these domain-specific characteristics are used to identify markers and to define the marking function of the watershed algorithm, as well as to eliminate false white blood cells in a postprocessing step. Working on 650 white blood cells in peripheral blood and bone marrow images, our experiments reveal that the proposed algorithm improves segmentation performance compared with its counterparts, leading to high accuracies for both sparsely distributed normal white blood cells and dense leukemic cell clusters. © 2014 International Society for Advancement of Cytometry.

Item Open Access
Content-based retrieval of historical Ottoman documents stored as textual images (IEEE, 2004) Şaykol, E.; Sinop, A. K.; Güdükbay, Uğur; Ulusoy, Özgür; Çetin, A. Enis
There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. The availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. In this paper, a framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented.
The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document; the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. Features in the wavelet and spatial domains, based on the angular and distance spans of shapes, are used to extract the symbols. To perform content-based retrieval in historical archives, a query is specified as a rectangular region in an input image, and the same symbol-extraction process is applied to the query region. The queries are processed on the codebook of documents, and the query images are identified in the resulting documents using the pointers in the textual images. The querying process does not require decompression of the images. The new content-based retrieval framework is also applicable to many other document archives using different scripts.

Item Open Access
Cross-term-free time-frequency distribution reconstruction via lifted projections (Institute of Electrical and Electronics Engineers, 2015-01) Deprem, Z.; Çetin, A. Enis
A crucial aspect of time-frequency (TF) analysis is the identification of separate components in a multicomponent signal. The Wigner-Ville distribution is the classical tool for representing such signals, but it suffers from cross-terms. Other methods, which are members of Cohen's class of distributions, also aim to remove the cross-terms by masking the ambiguity function (AF), but they result in reduced resolution. Most practical time-varying signals take the form of weighted trajectories on the TF plane, and many others are sparse in nature. Therefore, in recent studies the problem is cast as TF distribution reconstruction using a subset of AF-domain coefficients and a sparsity assumption. Sparsity can be achieved by constraining or minimizing the l1 norm.
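Enforcing sparsity by minimizing the l1 norm typically reduces to repeated soft-thresholding. As a hedged, generic sketch of this idea (the iterative soft-thresholding algorithm on a toy compressed-sensing problem, not the lifted-projections method of the article below, and with a hypothetical matrix and signal):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.05, n_iter=500):
    """ISTA for min_x ||A x - b||^2 / 2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    # Step size 1/L, with L the largest eigenvalue of A^T A.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Sparse ground truth: 2 nonzeros out of 50, seen through 25 random measurements.
rng = np.random.default_rng(0)
x_true = np.zeros(50)
x_true[[7, 31]] = [1.0, -0.8]
A = rng.standard_normal((25, 50)) / np.sqrt(25)
b = A @ x_true
x_hat = ista(A, b)
```

The same gradient-then-shrink structure underlies most l1-regularized reconstruction schemes; projection-based methods replace the shrinkage with projections onto convex constraint sets.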
In this article, an l1 minimization approach based on projections onto convex sets is proposed to obtain a high-resolution, cross-term-free TF distribution for a given signal. The new method does not require any parameter adjustment to obtain a solution. Experimental results are presented.

Item Open Access
Cumulant-based parametric multichannel FIR system identification methods (IEEE, 1993) Alshebeili, S. A.; Özgen, Mehmet Tankut; Çetin, A. Enis; Venetsanopoulos, A. N.
In this paper, "least squares" and recursive methods for the simultaneous identification of four nonminimum-phase linear, time-invariant FIR systems are presented. The methods utilize the second- and fourth-order cumulants of the outputs of the four FIR systems, whose common input is an independent, identically distributed (i.i.d.) non-Gaussian process. The new methods can be extended to the general problem of simultaneous identification of three or more FIR systems by choosing the order of the utilized cumulants to be equal to the number of systems. To illustrate the effectiveness of our methods, two simulation examples are included.

Item Open Access
Current constrained voltage scaled reconstruction (CCVSR) algorithm for MR-EIT and its performance with different probing current patterns (Institute of Physics Publishing, 2003) Birgül, Ö.; Eyüboğlu, B. M.; İder, Y. Z.
Conventional injected-current electrical impedance tomography (EIT) and magnetic resonance imaging (MRI) techniques can be combined to reconstruct high-resolution true conductivity images. The magnetic flux density distribution generated by the internal current density distribution is extracted from MR phase images. This information is used to form a finely detailed conductivity image using an Ohm's-law-based update equation. The reconstructed conductivity image is assumed to differ from the true image by a scale factor.
EIT surface potential measurements are then used to scale the reconstructed image in order to find the true conductivity values. This process is iterated until a stopping criterion is met. Several simulations are carried out for opposite and cosine current injection patterns to select the best current injection pattern for a 2D thorax model. The contrast resolution and accuracy of the proposed algorithm are also studied. In all simulation studies, realistic noise models for the voltage and magnetic flux density measurements are used. It is shown that, in contrast to conventional EIT techniques, the proposed method is capable of reconstructing conductivity images with uniform and high spatial resolution. The spatial resolution is limited by the larger of the element size of the finite element mesh and twice the magnetic resonance image pixel size.

Item Open Access
Data mining experiments on the Angiotensin II-Antagonist in Paroxysmal Atrial Fibrillation (ANTIPAF-AFNET 2) trial: ‘exposing the invisible’ (Oxford University Press, 2016) Okutucu, S.; Katircioglu-Öztürk, D.; Oto, E.; Güvenir, H. A.; Karaagaoglu, E.; Oto, A.; Meinertz, T.; Goette, A.
Aims: The aims of this study are (i) to pursue data-mining experiments on the Angiotensin II-Antagonist in Paroxysmal Atrial Fibrillation (ANTIPAF-AFNET 2) trial dataset, which contains atrial fibrillation (AF) burden scores of patients along with many clinical parameters, and (ii) to reveal possible correlations between the estimated risk factors of AF and the other clinical findings or measurements provided in the dataset. Methods: Ranking Instances by Maximizing the Area under a Receiver Operating Characteristics (ROC) Curve (RIMARC) is used to determine the predictive weights (Pw) of baseline variables on the primary endpoint. The chi-square automatic interaction detector algorithm is run for comparison with the results of RIMARC.
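The core idea behind RIMARC, as named above, is scoring candidates by the area under a ROC curve. A minimal sketch of ranking feature columns by single-feature AUC follows; the data are synthetic and this is only an illustration of the AUC-ranking idea, not the published RIMARC implementation.

```python
import numpy as np

def auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney) statistic."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def rank_features(X, y):
    """Order feature columns by how well each one alone separates y.
    AUC is folded around 0.5 so inversely predictive features also rank high."""
    scores = [max(a := auc(X[:, j], y), 1 - a) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1], scores

# Synthetic data: feature 0 is predictive of y, feature 1 is pure noise.
rng = np.random.default_rng(1)
y = rng.random(200) < 0.5
X = np.column_stack([y + 0.5 * rng.standard_normal(200),
                     rng.standard_normal(200)])
order, scores = rank_features(X, y)
```

A full ranking method would additionally handle ties, discretize features, and validate on held-out data, as the trial analysis above does with a blinded test set.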
The primary endpoint of the ANTIPAF-AFNET 2 trial was the percentage of days with documented episodes of paroxysmal AF or with suspected persistent AF. Results: By means of the RIMARC analysis algorithm, the baseline SF-12 mental component score (Pw = 0.3597), age (Pw = 0.2865), blood urea nitrogen (BUN) (Pw = 0.2719), systolic blood pressure (Pw = 0.2240), and creatinine level (Pw = 0.1570) of the patients were found to be predictors of AF burden. Atrial fibrillation burden increases as the baseline SF-12 mental component score gets lower; as systolic blood pressure, BUN, and creatinine levels get higher; and as the patient gets older. The AF burden increased significantly at age >76. Conclusions: With the ANTIPAF-AFNET 2 dataset, the present data-mining analyses suggest that the baseline SF-12 mental component score, age, systolic blood pressure, BUN, and creatinine level of the patients are predictors of AF burden. Additional studies are necessary to understand the distinct kidney-specific pathophysiological pathways that contribute to AF burden. Published on behalf of the European Society of Cardiology.

Item Open Access
Electrostatics of Polymer Translocation Events in Electrolyte Solutions (American Institute of Physics Inc., 2016) Buyukdagli, S.; Ala-Nissila, T.
We develop an analytical theory that accounts for the image and surface charge interactions between a charged dielectric membrane and a DNA molecule translocating through the membrane. Translocation events through neutral carbon-based membranes are driven by a competition between the repulsive DNA-image-charge interactions and the attractive coupling between the DNA segments on the trans and cis sides of the membrane. The latter effect is induced by the reduction of the coupling by the dielectric membrane. In strong salt solutions, where the repulsive image-charge effects dominate the attractive trans-cis coupling, the DNA molecule encounters a translocation barrier of ∼10 kBT.
In dilute electrolytes, the trans-cis coupling takes over from the image-charge forces, and the membrane becomes a metastable attraction point that can trap translocating polymers over long time intervals. This mechanism can be used in translocation experiments to control DNA motion by tuning the salt concentration of the solution.

Item Open Access
Experimental results for 2D magnetic resonance electrical impedance tomography (MR-EIT) using magnetic flux density in one direction (Institute of Physics Publishing, 2003) Birgül, Ö.; Eyüboğlu, B. M.; İder, Y. Z.
Magnetic resonance electrical impedance tomography (MR-EIT) is an emerging imaging technique that reconstructs conductivity images using magnetic flux density measurements acquired with MRI together with conventional EIT measurements. In this study, experimental MR-EIT images from phantoms with conducting and insulating objects are presented. The technique is implemented using the 0.15 T Middle East Technical University MRI system. The dc current method used in magnetic resonance current density imaging is adopted. A reconstruction algorithm based on the sensitivity matrix relation between the conductivity and only one component of the magnetic flux distribution is used; therefore, the requirement for object rotation is eliminated. Once the relative conductivity distribution is found, it is scaled using the peripheral voltage measurements to obtain the absolute conductivity distribution. Images of several insulating and conducting objects in saline-filled phantoms are reconstructed.
The L2 norm of the relative error in the conductivity values is found to be 13%, 17% and 14% for three different conductivity distributions.

Item Open Access
Failed family photographs: errors left out from the algorithmic definition of perfect photographs (2018-06) Kaş, Naile
The subject matter of this study is defining and classifying the “errors” left out from the algorithmic definition of the “perfect photograph” in visual technologies, and making their relation to developments in imaging tools understandable. New features that promote “perfect photographs” are added to cameras and applications, and these features then start to shape perceptions. At this point, how the “perfect photograph” is defined is very important, because it is a highly subjective matter that varies from person to person and over time. The categorization is made according to the algorithms used in digital cameras, cellphone cameras, and applications: technical errors, timing errors, and non-smile errors. The applications and camera modes are exemplified, and their predetermined standards are explained in this study. The introduction of advances in the field of imaging technology has started to change the standards of family photography. As a part of vernacular photography, the family album takes its share of the effects of algorithms in cameras and applications. Given its strong relation to memory, the elimination of errors in family albums has important meanings and consequences. As the practical side of the thesis, “Failed” is an interactive installation composed of family photographs. To celebrate the imperfections in family albums, an interactive and participative approach is preferred.
All these “imperfections” in family albums that we could keep gave me the inspiration to pursue the definition of errors by algorithms.

Item Open Access
A fast neural-network algorithm for VLSI cell placement (Pergamon Press, 1998) Aykanat, Cevdet; Bultan, T.; Haritaoğlu, İ.
Cell placement is an important phase of current VLSI circuit design styles such as standard cell, gate array, and Field Programmable Gate Array (FPGA). Although nondeterministic algorithms such as Simulated Annealing (SA) have been successful in solving this problem, they are known to be slow. In this paper, a neural network algorithm is proposed that produces solutions as good as SA in substantially less time. This algorithm is based on the Mean Field Annealing (MFA) technique, which has been successfully applied to various combinatorial optimization problems. An MFA formulation for the cell placement problem is derived which can easily be applied to all VLSI design styles. To demonstrate that the proposed algorithm is applicable in practice, a detailed formulation for the FPGA design style is derived, and the layouts of several benchmark circuits are generated. The performance of the proposed cell placement algorithm is evaluated in comparison with the commercial automated circuit design software Xilinx Automatic Place and Route (APR), which uses the SA technique. Performance evaluation is conducted using the ACM/SIGDA Design Automation benchmark circuits. Experimental results indicate that the proposed MFA algorithm produces results comparable with APR. However, MFA is almost 20 times faster than APR on average.

Item Open Access
Feature issue of digital holography and 3D imaging (DH) introduction (Optical Society of America (OSA), 2014-07) Hayasaki, Y.; Zhou, C.; Popescu, G.; Onural, Levent
The OSA Topical Meeting "Digital Holography and 3D Imaging (DH)" was held in Seattle, Washington, July 13-17, 2014. Feature issues based on the DH meeting series have been released by Applied Optics (AO) since 2007. This year Optics Express (OE) and AO jointly decided to have one such feature issue in each journal. The DH meeting is expected to continue in the future; the next meeting is scheduled for 24-28 May 2015 at the Shanghai Institute of Optics and Fine Mechanics, Shanghai, China. © 2014 Optical Society of America

Item Open Access
Form and part through standard / non-standard duality (2017-09) Kınayoğlu, Gökhan
This thesis formulates its research through a two-fold approach: it introduces a novel algorithm, but first it establishes the relationship model through which the assessment of the algorithm should be made.
It discusses the intrinsic relation of “form” and “part” in architecture through an analysis of the concepts “standard” and “non-standard.” The form is the overall shape of an object, and the parts are the numerous constituents of the form. “Standard,” which is a central trait of architecture, and “non-standard,” a later introduction to architecture consisting of various formal alternatives, are included for their important formal and constructional characteristics. All four concepts are studied in their historical contexts and in relation to secondary themes such as tectonics, mass-production, and mass-customization. Simple essential techniques and various geometric formations in architecture are also covered through built examples, to further demonstrate the aspects of “standard” and “non-standard” in terms of “form” and “part.” Based on these four concepts, a quadripartite relation is established. The relationship model formulates a significant interpretation and interrelation of the four concepts and hence creates an analytical framework. Through the findings of the quadripartite relation’s last partition, an algorithm is devised. The algorithm can generate various alternative infrastructure models for surfaces of revolution through several parameters. The findings demonstrate essential advantages in terms of standardization, material use, simplicity, and ease of assembly. The algorithm can be altered slightly to adapt to the other three partitions.

Item Open Access
Fourier transform magnetic resonance current density imaging (FT-MRCDI) from one component of magnetic flux density (IOP Publishing, 2010-05-17) Ider, Y. Z.; Birgul, O.; Oran, O. F.; Arıkan, Orhan; Hamamura, M. J.; Muftuler, L. T.
Fourier transform (FT)-based algorithms for magnetic resonance current density imaging (MRCDI) from one component of the magnetic flux density have been developed for 2D and 3D problems.
For 2D problems, where the current is confined to the xy-plane and the z-component of the magnetic flux density is measured on the xy-plane inside the object, an iterative FT-MRCDI algorithm is developed by which both the current distribution inside the object and the z-component of the magnetic flux density on the xy-plane outside the object are reconstructed. The method is applied to simulated as well as actual phantom data. The effect of measurement error on the spatial resolution of the current density reconstruction is also investigated. For 3D objects, an iterative FT-based algorithm is developed whereby the projected current is reconstructed on any slice using as data the Laplacian of the z-component of the magnetic flux density measured for that slice. In an injected-current MRCDI scenario, the current is not divergence-free on the boundary of the object; the method developed in this study also handles this situation.
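The physical relation behind 2D MRCDI from a single flux component can be shown directly: with the current confined to the xy-plane and the in-plane field components neglected, Ampère's law gives Jx = (1/μ0) ∂Bz/∂y and Jy = -(1/μ0) ∂Bz/∂x. The finite-difference sketch below is a hypothetical illustration of this relation on a synthetic grid; it does not reproduce the paper's FT-based iterative algorithm.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def current_from_bz(Bz, dx, dy):
    """Finite-difference current density from the z flux component.
    Bz is indexed as Bz[iy, ix]; returns (Jx, Jy) in A/m^2."""
    dBz_dy, dBz_dx = np.gradient(Bz, dy, dx)  # axis 0 is y, axis 1 is x
    Jx = dBz_dy / MU0
    Jy = -dBz_dx / MU0
    return Jx, Jy

# Synthetic field: Bz linear in y corresponds to a uniform x-directed current.
dx = dy = 1e-3                                     # 1 mm grid spacing
y = np.arange(32) * dy
Bz = np.tile((MU0 * 5.0) * y[:, None], (1, 32))    # chosen so Jx = 5 A/m^2
Jx, Jy = current_from_bz(Bz, dx, dy)
```

Real reconstructions work the other way around: the measured Bz is noisy and incomplete, which is why regularized, iterative FT-domain schemes such as the one above are needed instead of plain differentiation.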