Browsing by Subject "Image quality"
Now showing 1 - 16 of 16
Item Open Access
Accelerated phase-cycled SSFP imaging with compressed sensing (Institute of Electrical and Electronics Engineers Inc., 2015) Çukur, T.
Balanced steady-state free precession (SSFP) imaging suffers from irrecoverable signal losses, known as banding artifacts, in regions of large B0 field inhomogeneity. A common solution is to acquire multiple phase-cycled images, each with a different frequency sensitivity, such that the locations of banding artifacts are shifted in space. These images are then combined to alleviate signal loss across the entire field of view. Although high levels of artifact suppression are achievable using a large number of images, this is a time-costly process that limits clinical utility. Here, we propose to accelerate the individual acquisitions such that the overall scan time equals that of a single SSFP acquisition. Aliasing artifacts and noise are minimized by using a variable-density random sampling pattern in k-space and by generating disjoint sampling patterns for the separate acquisitions. A sparsity-enforcing method is then used for image reconstruction. Demonstrations on realistic brain phantom images, and on in vivo brain and knee images, are provided. In all cases, the proposed technique enables robust SSFP imaging in the presence of field inhomogeneities without prolonging scan times. © 2014 IEEE.

Item Open Access
Adaptive methods for dithering color images (Institute of Electrical and Electronics Engineers, 1997-07) Akarun, L.; Yardımcı, Y.; Çetin, A. Enis
Most color image printing and display devices do not have the capability of reproducing true color images. A common remedy is the use of dithering techniques, which take advantage of the eye's lower sensitivity to spatial resolution and exchange higher color resolution for lower spatial resolution. In this paper, an adaptive error diffusion method for color images is presented.
The error diffusion filter coefficients are updated by a normalized least mean square-type (LMS-type) algorithm to prevent textural contours, color impulses, and color shifts, which are among the most common side effects of standard dithering algorithms. Another novelty of the new method is its vector character: previous applications of error diffusion have treated the individual color components of an image separately. Here, we develop a general vector approach and demonstrate through simulation studies that superior results are achieved.

Item Open Access
Infrared camera automatic focusing based on the cumulative probability of blur detection (IEEE, 2014-04) Çakır, Serdar; Çetin, A. Enis
Infrared (IR) cameras play an important role in target-track measurement and analysis. In scientific IR cameras used especially for research and military purposes, focusing is performed manually, which reduces the precision and reliability of the acquired measurements. Automatic camera focusing algorithms extract various features from the image and attempt to define a criterion for the best focus point. In this work, a no-reference blur measure used in image quality assessment is adapted in several respects, and the adapted measure is proposed for the IR camera autofocusing problem. Experimental studies show that the proposed method can be used successfully for the IR camera autofocusing problem.

Item Open Access
Estimation of depth fields suitable for video compression based on 3-D structure and motion of objects (Institute of Electrical and Electronics Engineers, 1998-06) Alatan, A. A.; Onural, L.
Intensity prediction along motion trajectories removes temporal redundancy considerably in video compression algorithms. In three-dimensional (3-D) object-based video coding, both 3-D motion and depth values are required for temporal prediction.
The required 3-D motion parameters for each object are found by the correspondence-based E-matrix method. The estimation of the correspondences (the two-dimensional (2-D) motion field) between frames and the segmentation of the scene into objects are achieved simultaneously by minimizing a Gibbs energy. The depth field is estimated by jointly minimizing a defined distortion and bit-rate criterion using the 3-D motion parameters; the resulting depth field is efficient in the rate-distortion sense. Bit-rate values corresponding to lossless encoding of the resultant depth fields are obtained using predictive coding; prediction errors are encoded by a Lempel-Ziv algorithm. The results are satisfactory for real-life video scenes.

Item Open Access
Impact of scalability in video transmission in promotion-capable differentiated services networks (IEEE, 2002-09) Gürses, E.; Akar, G. B.; Akar, Nail
Transmission of high-quality video over the Internet faces many challenges, including the unpredictable packet loss characteristics of the current Internet and the heterogeneity of receivers in terms of their bandwidth and processing capabilities. To address these challenges, we propose an architecture based on the temporally scalable and error-resilient video coding mode of the H.263+ codec. In this architecture, the video frames are transported over a new-generation IP network that supports differentiated services (Diffserv). We also propose a novel Two Rate Three Color Promotion-Capable Marker (trTCPCM) to be used at the edge of the Diffserv network. Our simulation study demonstrates that an average of 30 dB can be achieved in the case of highly congested links.

Item Open Access
Joint estimation and optimum encoding of depth field for 3-D object-based video coding (IEEE, 1996-09) Alatan, A. Aydın; Onural, Levent
3-D motion models can be used to remove temporal redundancy between image frames.
For efficient encoding using 3-D motion information, apart from the 3-D motion parameters, a dense depth field must also be encoded to achieve 2-D motion compensation on the image plane. Inspired by rate-distortion theory, a novel method is proposed to optimally encode the dense depth fields of the moving objects in the scene. Using two intensity frames and 3-D motion parameters as inputs, an encoded depth field is obtained by jointly minimizing a distortion criterion and a bit-rate measure. Since the method directly yields an encoded field as its output, it does not require an estimate of the field to be encoded. By efficiently encoding the depth field during the experiments, it is shown that 3-D motion models can be used in object-based video compression algorithms.

Item Open Access
LMS based adaptive prediction for scalable video coding (IEEE, 2006-05) Töreyin, B. Uğur; Trocan, M.; Pesquet-Popescu, B.; Çetin, A. Enis
3D video codecs have recently attracted considerable attention, due to their compression performance, comparable with that of state-of-the-art hybrid codecs, and their scalability features. In this work, we propose a least mean square (LMS) based adaptive prediction for the temporal prediction step in the lifting implementation. This approach improves the overall quality of the coded video by reducing both blocking and ghosting artifacts. Experimental results show that the video quality as well as PSNR values are greatly improved with the proposed adaptive method, especially for video sequences with large contrast between the moving objects and the background and for sequences with illumination variations. © 2006 IEEE.

Item Open Access
Magnetic resonance electrical impedance tomography (MREIT) based on the solution of the convection equation using FEM with stabilization (Institute of Physics Publishing, 2012-07-27) Oran, O. F.; Ider, Y. Z.
Most algorithms for magnetic resonance electrical impedance tomography (MREIT) concentrate on reconstructing the internal conductivity distribution of a conductive object from the Laplacian of only one component of the magnetic flux density (∇²Bz) generated by the internal current distribution. In this study, a new algorithm is proposed to solve this ∇²Bz-based MREIT problem, which is mathematically formulated as the steady-state scalar pure convection equation. Numerical methods developed for the solution of the more general convection-diffusion equation are utilized. It is known that the solution of the pure convection equation is numerically unstable if sharp variations of the field variable (in this case, conductivity) exist or if there are inconsistent boundary conditions. Various stabilization techniques based on introducing artificial diffusion have been developed to handle such cases; in this study, the streamline upwind Petrov-Galerkin (SUPG) stabilization method is incorporated into the Galerkin weighted residual finite element method (FEM) to numerically solve the MREIT problem. The proposed algorithm is tested with simulated data and with experimental data from phantoms. Successful conductivity reconstructions are obtained by solving the related convection equation with the Galerkin weighted residual FEM when there are no sharp variations in the actual conductivity distribution. However, when there is noise in the magnetic flux density data or when there are sharp variations in conductivity, SUPG stabilization is found to be beneficial.

Item Open Access
Near-lossless image compression techniques (SPIE - International Society for Optical Engineering, 1998) Ansari, R.; Memon, N.; Ceran, E.
Predictive and multiresolution techniques for near-lossless image compression based on the criterion of maximum allowable deviation of pixel values are investigated.
A procedure for near-lossless compression using a modification of lossless predictive coding techniques to satisfy the specified tolerance is described. Simulation results with modified versions of two of the best known lossless predictive coding techniques, CALIC and JPEG-LS, are provided. Application of lossless coding based on reversible transforms, in conjunction with prequantization, is shown to be inferior to predictive techniques for near-lossless compression. A partially embedded two-layer scheme is proposed in which an embedded multiresolution coder generates a lossy base layer, and a simple but effective context-based lossless coder codes the difference between the original image and the lossy reconstruction. Results show that this lossy plus near-lossless technique yields compression ratios close to those obtained with predictive techniques, while providing the feature of a partially embedded bit-stream. © 1998 SPIE and IS&T.

Item Open Access
Polyphase adaptive filter banks for fingerprint image compression (The Institution of Engineering and Technology, 1998-10-01) Gerek, Ö. N.; Çetin, A. Enis
A perfect reconstruction polyphase filter bank structure is presented in which the filters adapt to changing input conditions. The use of such a filter bank leads to higher compression ratios for images containing sharp edges, such as fingerprint images.

Item Open Access
Profile-encoding reconstruction for multiple-acquisition balanced steady-state free precession imaging (John Wiley and Sons Inc., 2017) Ilicak, Efe; Senel, Lutfi Kerem; Biyik, Erdem; Çukur, Tolga
Purpose: The scan efficiency of multiple-acquisition balanced steady-state free precession imaging can be maintained by accelerating and reconstructing each phase-cycled acquisition individually, but this strategy ignores correlated structural information among acquisitions. Here, an improved acceleration framework is proposed that jointly processes undersampled data across N phase cycles.
Methods: Phase-cycled imaging is cast as a profile-encoding problem, modeling each image as an artifact-free image multiplied with a distinct balanced steady-state free precession profile. A profile-encoding reconstruction (PE-SSFP) is employed to recover missing data by enforcing joint sparsity and total-variation penalties across phase cycles. PE-SSFP is compared with individual compressed-sensing and parallel-imaging (ESPIRiT) reconstructions. Results: In the brain and the knee, PE-SSFP yields improved image quality compared to individual compressed sensing and the other tested methods, particularly for higher N values. On average, PE-SSFP improves peak SNR by 3.8 ± 3.0 dB (mean ± s.e. across N = 2–8) and structural similarity by 1.4 ± 1.2% over individual compressed sensing, and peak SNR by 5.6 ± 0.7 dB and structural similarity by 7.1 ± 0.5% over ESPIRiT. Conclusion: PE-SSFP attains improved image quality and better preservation of high-spatial-frequency information at high acceleration factors, compared to conventional reconstructions. PE-SSFP is a promising technique for scan-efficient balanced steady-state free precession imaging with improved reliability against field inhomogeneity. Magn Reson Med 78:1316–1329, 2017.

Item Open Access
QR-RLS algorithm for error diffusion of color images (SPIE, 2000) Unal, G. B.; Yardimci, Y.; Arıkan, Orhan; Çetin, A. Enis
Printing color images on color printers and displaying them on computer monitors requires a significant reduction in the number of physically distinct colors, which causes degradation in image quality. An efficient method to improve the display quality of a quantized image is error diffusion, which works by distributing the previous quantization errors to neighboring pixels, exploiting the eye's averaging of colors in the neighborhood of the point of interest; this creates the illusion of more colors. A new error diffusion method is presented in which the adaptive recursive least-squares (RLS) algorithm is used.
This algorithm provides local optimization of the error diffusion filter along with smoothing of the filter coefficients in a neighborhood. To improve performance, a diagonal scan is used in processing the image.

Item Open Access
Reduction of effects of inactive array elements in phase aberration correction (IEEE, 1993) Karaman, Mustafa; Köymen, Hayrettin; Atalar, Abdullah; O'Donnell, M.
Phase aberration correction based on time delay estimation via minimization of the sum of absolute differences (SAD) between radio frequency (RF) signals of neighboring elements is studied in the presence of missing elements. To examine the influence of inactive elements, the phase estimation error is measured for various combinations of the number of missing elements, aberration level, and SNR. The measurements are performed on an experimental RF data set. Aberration delays of missing elements are interpolated from the phase estimates of the nearest active elements. B-scan images are reconstructed for qualitative examination.

Item Open Access
Resolution enhancement of low resolution wavefields with POCS algorithm (The Institution of Engineering and Technology, 2003) Çetin, A. Enis; Özaktaş, H.; Özaktaş, Haldun M.
The problem of enhancing the resolution of wavefield or beam profile measurements obtained using low resolution sensors is addressed by solving the problem of interpolating signals from partial fractional Fourier transform information in several domains. The iterative interpolation algorithm employed is based on the method of projections onto convex sets (POCS).

Item Open Access
Robust transmission of multi-view video streams using flexible macroblock ordering and systematic LT codes (IEEE, 2007) Argyropoulos, S.; Tan, A. Serdar; Thomos, N.; Arıkan, Erdal; Strintzis, M. G.
The transmission of fully compatible H.264/AVC multi-view video coded streams over packet erasure networks is examined.
Macroblock classification into unequally important slice groups is considered using the Flexible Macroblock Ordering (FMO) tool of H.264/AVC. Systematic LT codes are used for error protection due to their low complexity and strong performance. The optimal slice grouping and channel rate allocation are jointly determined by an iterative optimization algorithm based on dynamic programming. The experimental evaluation clearly demonstrates the validity of the proposed method.

Item Open Access
Scalable image quality assessment with 2D mel-cepstrum and machine learning approach (Elsevier, 2011-07-19) Narwaria, M.; Lin, W.; Çetin, A. Enis
Measurement of image quality is of fundamental importance to numerous image and video processing applications. Objective image quality assessment (IQA) is a two-stage process comprising (a) extraction of important information while discarding the redundant, and (b) pooling of the detected features using appropriate weights. Neither stage is easy to tackle, owing to the complex nature of the human visual system (HVS). In this paper, we first investigate image features based on the two-dimensional (2D) mel-cepstrum for the purpose of IQA. It is shown that these features are effective since they can represent structural information, which is crucial for IQA. Moreover, they are also beneficial in a reduced-reference scenario, where only partial reference image information is used for quality assessment. We address the second stage by exploiting machine learning. In our opinion, the well-established methodology of machine learning/pattern recognition has not been adequately used for IQA so far; we believe it is an effective tool for feature pooling, since the required weights/parameters can be determined in a more convincing way via training with ground truth obtained according to subjective scores.
This helps to overcome the limitations of existing pooling methods, which tend to be overly simplistic and lack theoretical justification. Therefore, we propose a new metric by formulating IQA as a pattern recognition problem. Extensive experiments conducted using six publicly available image databases (3211 images in total, with diverse distortions) and one video database (with 78 video sequences) demonstrate the effectiveness and efficiency of the proposed metric, in comparison with seven relevant existing metrics.
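The learning-based pooling idea in the last entry above can be illustrated with a toy sketch. This is not the authors' 2D mel-cepstrum (which uses mel-scaled frequency warping) or their trained model; it is a minimal stand-in that extracts low-order 2D cepstral coefficients as structural features and fits pooling weights by least squares against synthetic quality scores. All names, the noise model, and the pseudo scores are illustrative assumptions.

```python
import numpy as np

def cepstral_features(img, n_coeffs=8):
    """Crude 2D cepstrum features: real IFFT of the log magnitude
    spectrum, keeping a low-order block of coefficients. A rough
    stand-in for the paper's 2D mel-cepstrum features."""
    spec = np.fft.fft2(img)
    log_mag = np.log1p(np.abs(spec))            # log magnitude spectrum
    ceps = np.real(np.fft.ifft2(log_mag))       # 2D cepstrum
    return ceps[:n_coeffs, :n_coeffs].ravel()   # low-order coefficients

# Toy training set: one base image degraded by increasing noise,
# with a synthetic "subjective" score that falls as noise grows.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
X, y = [], []
for sigma in np.linspace(0.0, 1.0, 20):
    noisy = base + sigma * rng.standard_normal(base.shape)
    X.append(cepstral_features(noisy))
    y.append(1.0 - sigma)                       # pseudo quality score
X, y = np.asarray(X), np.asarray(y)

# Learn pooling weights from the ground-truth scores (least squares
# here; the paper uses a full machine-learning formulation).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w                                    # predicted quality scores
```

On this toy data the predicted scores track the synthetic scores closely; the point is only that the pooling weights are learned from ground-truth scores rather than hand-designed.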