Browsing by Subject "Regularization"
Now showing 1 - 11 of 11
Item Open Access
Automated parameter selection for accelerated MRI reconstruction via low-rank modeling of local k-space neighborhoods (Elsevier GmbH, 2022-02-01) Ilıcak, Efe; Sarıtaş, Emine Ülkü; Çukur, Tolga
Purpose: Image quality in accelerated MRI rests on careful selection of various reconstruction parameters. A common yet tedious and error-prone practice is to hand-tune each parameter to attain visually appealing reconstructions. Here, we propose a parameter tuning strategy to automate hybrid parallel imaging (PI) – compressed sensing (CS) reconstructions via low-rank modeling of local k-space neighborhoods (LORAKS) supplemented with sparsity regularization in wavelet and total variation (TV) domains. Methods: For low-rank regularization, we leverage a soft-thresholding operation based on singular values for matrix rank selection in LORAKS. For sparsity regularization, we employ Stein's unbiased risk estimate criterion to select the wavelet regularization parameter and the local standard deviation of reconstructions to select the TV regularization parameter. Comprehensive demonstrations are presented on a numerical brain phantom and in vivo brain and knee acquisitions. Quantitative assessments are performed via PSNR, SSIM and NMSE metrics. Results: The proposed hybrid PI-CS method improves reconstruction quality compared to PI-only techniques, and it achieves image quality on par with reconstructions based on brute-force optimization of reconstruction parameters. These results hold across several different datasets and the range of examined acceleration rates. Conclusion: A data-driven parameter tuning strategy to automate hybrid PI-CS reconstructions is presented. The proposed method achieves reliable reconstructions of accelerated multi-coil MRI datasets without the need for exhaustive hand-tuning of reconstruction parameters. © 2022

Item Open Access
Kernel ridge regression model for sediment transport in open channel flow (Springer, 2021-01-11) Safari, M. J. S.; Arashloo, Shervin Rahimzadeh
Sediment transport modeling is of primary importance for the determination of channel design velocity in lined channels. This study proposes to model sediment transport in open channel flow using kernel ridge regression (KRR), a nonlinear regression technique formulated in the reproducing kernel Hilbert space. While the naïve kernel regression approach provides high flexibility for modeling purposes, the regularized variant is equipped with an additional mechanism for better generalization capability. To better tailor the KRR approach to the sediment transport modeling problem, unlike the conventional KRR approach, in this study the kernel parameter is learned directly from the data via a new gradient descent-based learning mechanism. Moreover, for model construction, a procedure based on Cholesky decomposition and forward-back substitution is applied to improve the computational complexity of the approach. The recommended technique is evaluated on a large set of laboratory experimental data, where examination of the proposed approach in terms of three statistical performance indices for sediment transport modeling indicates better performance of the developed model in particle Froude number computation, outperforming the conventional models as well as several other machine learning techniques.

Item Open Access
The method of regularization and its application to some EM problems (Springer, 2000) Altıntaş, Ayhan; Nosich, A. I.; Uzunoğlu, N. K.; Nikita, K. S.; Kaklamani, D. I.
The regularization of integral equations for the solution of electromagnetic problems is discussed. The technique includes a semi-analytic inversion of the integral operator, resulting in an equation of the Fredholm second kind, which can be solved using numerical inversion. The procedure is employed through the Riemann-Hilbert Problem technique for electromagnetic problems that can be put into a dual-series equation form.
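As a side note on the kernel ridge regression item above: its Cholesky-based model construction can be illustrated with a minimal numpy sketch. The Gaussian RBF kernel, the function names, and all parameter values below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix: k(x, z) = exp(-gamma * ||x - z||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-2, gamma=1.0):
    """Solve (K + lam*I) alpha = y via Cholesky + forward/back substitution."""
    K = rbf_kernel(X, X, gamma)
    L = np.linalg.cholesky(K + lam * np.eye(len(X)))  # K + lam*I = L L^T
    z = np.linalg.solve(L, y)        # forward substitution: L z = y
    alpha = np.linalg.solve(L.T, z)  # back substitution: L^T alpha = z
    return alpha

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    """Predict via the kernel expansion f(x) = sum_i alpha_i * k(x, x_i)."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

Because the regularized kernel matrix is symmetric positive definite, the Cholesky route solves the system with two triangular substitutions instead of a general matrix inversion, which is the computational saving the abstract alludes to.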
An example of the method is described for the E-wave scattering from a cavity-backed aperture.

Item Open Access
Minimizers of sparsity regularized Huber loss function (Springer, 2020) Akkaya, Deniz; Pınar, Mustafa Ç.
We investigate the structure of the local and global minimizers of the Huber loss function regularized with a sparsity-inducing L0 norm term. We characterize local minimizers and establish conditions that are necessary and sufficient for a local minimizer to be strict. A necessary condition is established for global minimizers, as well as non-emptiness of the set of global minimizers. The sparsity of minimizers is also studied by giving bounds on a regularization parameter controlling sparsity. Results are illustrated in numerical examples.

Item Open Access
Minimizers of sparsity regularized robust loss functions (Bilkent University, 2021-06) Akkaya, Deniz
We study the structure of the local and global minimizers of the Huber loss and the sum of absolute deviations functions regularized with a sparsity penalty L0 norm term. We characterize local minimizers for both loss functions, and establish conditions that are necessary and sufficient for local minimizers to be strict. A necessary condition is established for global minimizers, as well as non-emptiness of the set of global minimizers. The sparsity of minimizers is also studied by giving bounds on a regularization parameter controlling sparsity. Results are illustrated in numerical examples.

Item Open Access
Open-set object recognition (Bilkent University, 2022-07) Mohammad, Salman
Despite significant advances in object recognition and classification over the past couple of decades, there are various situations where collecting representative training samples from all classes in real-world scenarios is quite expensive, or the system may be exposed to unpredictable novel samples at test time.
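The sparsity-regularized Huber objective studied in the two minimizer items above can be written down in a few lines. This is a toy evaluation of the objective, not the authors' analysis; the function names and parameter values are assumptions.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss applied elementwise: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def sparsity_regularized_huber(x, A, b, lam=0.1, delta=1.0):
    """Objective F(x) = sum_i Huber((Ax - b)_i) + lam * ||x||_0,
    where ||x||_0 counts the nonzero entries of x (the sparsity penalty)."""
    residual = A @ x - b
    return float(huber(residual, delta).sum() + lam * np.count_nonzero(x))
```

For a toy instance with A = I and b = 0, a dense candidate x = (0.1, 0.1) pays both a small loss and the full L0 penalty, while the all-zero vector attains objective zero, illustrating how the L0 term rewards sparsity.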
The pattern classification problem is commonly referred to as an open-set recognition task in such cases, where the model is given limited and incomplete knowledge of the full data distribution during training. At test time, unknown classes may be encountered, requiring the classifier to accurately classify previously seen classes while effectively rejecting unseen ones. Among others, one-class classification serves as a plausible solution to the open-set recognition problem. Nevertheless, current one-class classifiers have their limitations. Classical kernel-based approaches require carefully designed features to obtain reasonable performance, but rest on a solid basis in statistical learning theory, providing good robustness against training set impurities. More recent deep learning-based methods, on the other hand, learn relevant features directly from the data but typically rely on ad hoc one-class loss functions, which often do not generalize well and are not robust against the omnipresent noise and contamination in the training set. In this thesis, we introduce a novel approach which leverages the advantages of both kernel-based and deep learning approaches by bringing the two learning formalisms under a common umbrella. In particular, the proposed method learns deep convolutional features to optimize a kernel Fisher null-space loss subject to a Tikhonov regularisation on the discriminant in the Hilbert space. As such, it can be trained in a deep end-to-end fashion while being robust against training set contamination. Through extensive experiments conducted on different image datasets in various evaluation settings, the proposed approach is shown to be quite robust and more effective than current state-of-the-art methods for anomaly detection in the scenario where the training set is corrupted and contains noisy samples.
At the same time, the proposed approaches can be effectively utilized in an unsupervised scenario to rank the data points based on their conformity with the majority of samples.

Item Open Access
Regularized motion estimation techniques and their applications to video coding (Bilkent University, 1996) Kıranyaz, Serkan
Novel regularized motion estimation techniques and their possible applications to video coding are presented. A block matching motion estimation algorithm which extracts a better block motion field by forming and minimizing a suitable energy function is introduced. Based on an adaptive structure for block sizes, an advanced block matching algorithm is presented. The block sizes are adaptively adjusted according to the motion. A blockwise coarse-to-fine segmentation based motion estimation algorithm is introduced for further reduction of the number of bits spent on the coding of the block motion vectors. Motion estimation algorithms which can be used for average motion determination and artificial frame generation by fractional motion compensation are also developed. Finally, an alternative motion estimation and compensation technique which defines feature-based motion vectors on the object boundaries and reconstructs the decoded frame from the interpolation of the compensated object boundaries is presented. All the algorithms developed in this thesis are simulated on real or synthetic images and their performance is demonstrated.

Item Open Access
Robust one-class kernel spectral regression (IEEE, 2021-03) Arashloo, Shervin Rahimzadeh; Kittler, J.
The kernel null-space technique is known to be an effective one-class classification (OCC) technique. Nevertheless, the applicability of this method is limited due to its susceptibility to possible training data corruption and the inability to rank training observations according to their conformity with the model.
This article addresses these shortcomings by regularizing the solution of the null-space kernel Fisher methodology in the context of its regression-based formulation. In this respect, first, the effect of Tikhonov regularization in the Hilbert space is analyzed, where the one-class learning problem in the presence of contamination in the training set is posed as a sensitivity analysis problem. Next, the effect of the sparsity of the solution is studied. For both alternative regularization schemes, iterative algorithms are proposed which recursively update label confidences. Through extensive experiments, the proposed methodology is found to enhance robustness against contamination in the training set compared with the baseline kernel null-space method, as well as other existing approaches in the OCC paradigm, while providing the functionality to rank training samples effectively.

Item Open Access
Subset based error recovery (Elsevier BV, 2021-10-12) Ekmekcioğlu, Ömer; Akkaya, Deniz; Pınar, Mustafa Çelebi
We propose a data denoising method using the Extreme Learning Machine (ELM) structure, which allows us to use the Johnson-Lindenstrauss lemma (JL) for preserving the Restricted Isometry Property (RIP) in order to give theoretical recovery guarantees. Furthermore, we show that the method is equivalent to a robust two-layer ELM that implicitly benefits from the proposed denoising algorithm. Current robust ELM methods in the literature involve well-studied L1, L2 regularization techniques as well as the use of robust loss functions such as the Huber loss. We extend the recent analysis in the robust regression literature to be effective in more general, non-linear settings and to be compatible with any ML algorithm such as neural networks (NN). These methods are useful in scenarios where the observations suffer from heavy noise. We extend the usage of ELM as a general data denoising method independent of the ML algorithm.
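The two-layer ELM structure referenced above — a random, untrained hidden projection followed by a ridge-regularized linear readout — can be sketched in a few lines of numpy. This is a generic ELM regression sketch under assumed names and parameters, not the paper's robust denoising variant.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50, lam=1e-2):
    """Extreme Learning Machine: a random hidden layer (a JL-style random
    projection through a nonlinearity) plus a ridge-regularized linear
    readout solved in closed form."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, untrained input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    # Ridge (L2-regularized) least squares: (H^T H + lam*I) beta = H^T y
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Only the output weights `beta` are trained; the random projection is fixed, which is what makes JL-type arguments about distance preservation applicable to the hidden representation.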
Tests of the denoising and regularized ELM methods are conducted on both synthetic and real data. Our method performs better than its competitors in most scenarios, and successfully eliminates most of the noise.

Item Open Access
Targeted vessel reconstruction in non-contrast-enhanced steady-state free precession angiography (John Wiley and Sons Ltd, 2016) Ilicak, E.; Cetin, S.; Bulut, E.; Oguz, K. K.; Saritas, E. U.; Unal, G.; Çukur, T.
Image quality in non-contrast-enhanced (NCE) angiograms is often limited by scan time constraints. An effective solution is to undersample angiographic acquisitions and to recover vessel images with penalized reconstructions. However, conventional methods leverage penalty terms with uniform spatial weighting, which typically yield insufficient suppression of aliasing interference and suboptimal blood/background contrast. Here we propose a two-stage strategy where a tractographic segmentation is employed to auto-extract vasculature maps from undersampled data. These maps are then used to incur spatially adaptive sparsity penalties on vascular and background regions. In vivo steady-state free precession angiograms were acquired in the hand, lower leg and foot. Compared with regular non-adaptive compressed sensing (CS) reconstructions (CSlow), the proposed strategy improves blood/background contrast by 71.3 ± 28.9% in the hand (mean ± s.d. across acceleration factors 1-8), 30.6 ± 11.3% in the lower leg and 28.1 ± 7.0% in the foot (signed-rank test, P < 0.05 at each acceleration). The proposed targeted reconstruction can relax trade-offs between image contrast, resolution and scan efficiency without compromising vessel depiction.

Item Open Access
Use of dropouts and sparsity for regularization of autoencoders in deep neural networks (Bilkent University, 2015-01) Ali, Muhaddisa Barat
Deep learning has emerged as an effective pre-training technique for neural networks with many hidden layers.
To overcome the overfitting issue, usually large capacity models are used. In this thesis, two methodologies frequently utilized in the deep neural network literature have been considered. First, for pre-training, the performance of the sparse autoencoder has been improved by adding a p-norm of the sparse penalty term to an over-complete case. This efficiently induces sparsity in the hidden layers of a deep network to overcome overfitting issues. At the end of training, the features constructed for each layer carry a variety of useful information to initialize a deep network. The accuracy obtained is comparable to the conventional sparse autoencoder technique. Second, large capacity networks suffer from complex co-adaptations between the hidden layers, as the predictions of each unit in the previous layer are combined to generate the features of the next layer. This results in certain redundant features. The idea we propose is therefore to impose a threshold level on the hidden activations, allowing only the most active units to participate in the reconstruction of the features and suppressing the effect of less active units in the optimization. This is implemented by dropping out the k lowest hidden units while retaining the rest. Our simulations confirm the hypothesis that the k-lowest dropouts help the optimization in both the pre-training and fine-tuning phases, giving rise to internal distributed representations with better generalization. Moreover, this model converges more quickly than the conventional dropout method. In a classification task on the MNIST dataset, the proposed idea gives results comparable to previous regularization techniques such as denoising autoencoders and the use of rectifier linear units combined with standard regularizations. The deep networks constructed from the combination of our models achieve results similar to the state of the art obtained with the dropout idea, at lower time complexity, making them well suited to large problem sizes.
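The k-lowest dropout idea in the last item above — deterministically suppressing the least active hidden units rather than dropping units at random — can be sketched in a few lines of numpy. The function name and the use of absolute activation magnitude as the activity measure are my assumptions.

```python
import numpy as np

def k_lowest_dropout(h, k):
    """Zero out the k least-active hidden units (smallest |activation|)
    per sample, retaining the rest, as an alternative to random dropout."""
    h = np.asarray(h, dtype=float)
    out = h.copy()
    # Indices of the k smallest-magnitude activations along the unit axis.
    idx = np.argsort(np.abs(h), axis=-1)[..., :k]
    np.put_along_axis(out, idx, 0.0, axis=-1)
    return out
```

Unlike standard dropout, the mask here depends on the activations themselves, so the same forward pass always suppresses the same weak units for a given input, which is what lets the thesis apply it in both pre-training and fine-tuning.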