Browsing by Subject "compressive sensing"
Now showing 1 - 3 of 3
Item (Open Access): Image restoration and reconstruction using projections onto epigraph set of convex cost functions (2015). Tofighi, Mohammad.

This thesis focuses on image restoration and reconstruction problems. These inverse problems are solved using a convex optimization algorithm based on orthogonal Projections onto the Epigraph Set of a Convex cost function (PESC). To solve the convex minimization problem, the dimension of the problem is lifted by one, and the feasibility sets corresponding to the cost function are defined using the epigraph concept. Since the cost function is convex in R^N, the corresponding epigraph set is a convex set in R^(N+1). The convex optimization algorithm starts with an arbitrary initial estimate in R^(N+1), and at each step of the iterative algorithm an orthogonal projection is performed onto one of the constraint sets associated with the cost function, in a sequential manner. The PESC algorithm provides globally optimal solutions for cost functions such as total variation, the ℓ1-norm, the ℓ2-norm, and entropic cost functions. Denoising, deconvolution, and compressive sensing are among the applications of the PESC algorithm. Projection onto the Epigraph Set of the Total Variation function (PES-TV) is used in 2-D applications, while Projection onto the Epigraph Set of the ℓ1-norm cost function (PES-ℓ1) is used in 1-D applications. In the PES-ℓ1 algorithm, the observation signal is first decomposed using a wavelet or pyramidal decomposition. Both wavelet denoising and sparsity-based denoising methods rely on soft-thresholding. In sparsity-based denoising methods, the original signal is assumed to be sparse in some transform domain, such as the Fourier, DCT, or wavelet domain, and the transform-domain coefficients of the noisy signal are soft-thresholded to reduce noise. Here, the relationship between standard soft-thresholding-based denoising methods and sparsity-based wavelet denoising methods is described.
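The soft-thresholding operation that both families of denoising methods rely on can be sketched in a few lines (an illustrative snippet, not code from the thesis):

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink every coefficient toward zero by t; coefficients with
    magnitude below t become exactly zero, which creates sparsity."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Transform-domain denoising: small, noise-dominated coefficients are
# zeroed out, while large signal-dominated ones are only shrunk.
coeffs = np.array([4.0, -0.3, 2.5, 0.1, -1.8])
denoised = soft_threshold(coeffs, 0.5)
```

In a sparsity-based scheme, `coeffs` would be the wavelet (or DCT/Fourier) coefficients of the noisy signal, and the denoised signal is obtained by inverse-transforming `denoised`.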
A deterministic soft-threshold estimation method using the epigraph set of the ℓ1-norm cost function is presented. It is demonstrated that the size of the ℓ1-ball can be determined using linear algebra, and the size of the ℓ1-ball in turn determines the soft threshold. The PESC, PES-TV, and PES-ℓ1 algorithms are described in detail in this thesis, and extensive simulation results are presented. The PESC-based restoration and reconstruction algorithms are compared to state-of-the-art methods in the literature.

Item (Open Access): A novel compression algorithm based on sparse sampling of 3-D laser range scans (2010). Dobrucalı, Oğuzcan.

3-D models of environments can be very useful and are commonly employed in areas such as robotics, art and architecture, environmental planning, and documentation. A 3-D model typically comprises a large number of measurements. When 3-D models of environments need to be transmitted or stored, they should be compressed efficiently to use the capacity of the communication channel or the storage medium effectively. In this thesis, we propose a novel compression technique based on compressive sampling applied to sparse representations of 3-D laser range measurements. The main issue here is finding highly sparse representations of the range measurements, since they do not have such representations in common domains, such as the frequency domain. To solve this problem, we develop a new algorithm to generate sparse innovations between consecutive range measurements acquired while the sensor moves. We compare the sparsity of our innovations with others generated by estimation and filtering. Furthermore, we compare the compression performance of our lossy compression method with widely used lossless and lossy compression techniques.
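The idea behind sparse innovations can be pictured with a toy example (hypothetical numbers and a deliberately simplified difference operation, not the thesis's algorithm): consecutive scans of a slowly changing scene differ in only a few entries, so the difference signal is far sparser than either scan alone and is therefore a better candidate for compressive sampling.

```python
import numpy as np

# Hypothetical illustration: two consecutive laser range scans of the same
# scene differ only where the sensor's motion has exposed new geometry, so
# their difference (the "innovation") is much sparser than either scan.
rng = np.random.default_rng(0)
scan_prev = rng.uniform(1.0, 10.0, size=360)   # one range value per degree
scan_curr = scan_prev.copy()
scan_curr[100:110] += 0.8                      # only a small region changed

innovation = scan_curr - scan_prev             # 10 nonzeros out of 360
sparsity = np.count_nonzero(innovation)
```

Neither scan is sparse in itself (every entry is a nonzero range value), but the innovation is, which is exactly the property compressive sampling needs.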
The proposed method achieves a small compression ratio and provides a reasonable compromise between reconstruction error and processing time.

Item (Open Access): Signal representation and recovery under measurement constraints (2012). Özçelikkale Hünerli, Ayça.

We are concerned with a family of signal representation and recovery problems under various measurement restrictions. We focus on finding performance bounds for these problems, where the aim is to reconstruct a signal from its direct or indirect measurements. One of our main goals is to understand the effect of different forms of finiteness in the sampling process, such as a finite number of samples or finite amplitude accuracy, on the recovery performance. In the first part of the thesis, we use a measurement device model in which each device has a cost that depends on its amplitude accuracy: the cost of a measurement device is primarily determined by the number of amplitude levels that the device can reliably distinguish; devices with higher numbers of distinguishable levels have higher costs. We also assume that there is a limited cost budget, so that it is not possible to make a high-amplitude-resolution measurement at every point. We investigate the optimal allocation of the cost budget to the measurement devices so as to minimize the estimation error. In contrast to common practice, which often treats sampling and quantization separately, we explicitly focus on the interplay between limited spatial resolution and limited amplitude accuracy. We show that in certain cases, sampling at rates different from the Nyquist rate is more efficient. We find the optimal sampling rates and the resulting optimal error-cost trade-off curves. In the second part of the thesis, we formulate a set of measurement problems with the aim of reaching a better understanding of the relationship between the geometry of statistical dependence in the measurement space and the total uncertainty of the signal.
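The amplitude-accuracy cost model can be made concrete with a toy version (an illustrative assumption, not the thesis's exact formulation): suppose a device that reliably distinguishes 2^b amplitude levels costs b "bits" of budget, and model it as a uniform b-bit quantizer on [0, 1), whose mean-square error is close to Δ²/12 with step Δ = 2^(-b).

```python
import numpy as np

# Toy cost model (assumed for illustration only): b bits of budget buy
# 2**b distinguishable levels; a uniform quantizer with step 2**-b then
# incurs mean-square error approximately (2**-b)**2 / 12, so each extra
# bit spent on a device cuts that device's error roughly fourfold.
def quantize(x, bits):
    levels = 2 ** bits
    idx = np.floor(x * levels)
    return (idx + 0.5) / levels  # reconstruct at the cell midpoint

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=100_000)
errors = {b: float(np.mean((quantize(x, b) - x) ** 2)) for b in (2, 4, 8)}
predicted = {b: (2.0 ** -b) ** 2 / 12.0 for b in (2, 4, 8)}
```

Under a total budget, allocating bits across devices trades amplitude accuracy at one point against the number of points that can be measured at all, which is the interplay the first part of the thesis studies.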
These problems are investigated in a mean-square error setting under the assumption of Gaussian signals. An important aspect of our formulation is our focus on the linear unitary transformation that relates the canonical signal domain and the measurement domain. We consider measurement set-ups in which a random or a fixed subset of the signal components in the measurement space are erased. We investigate the error performance, both on average and in terms of guarantees that hold with high probability, as a function of system parameters. Our investigation also reveals a possible relationship between the concept of coherence of random fields as defined in optics and the concept of coherence of bases as defined in compressive sensing, through the fractional Fourier transform. We also consider an extension of our discussion to stationary Gaussian sources. We find explicit expressions for the mean-square error for equidistant sampling, and comment on the decay of error introduced by using finite-length representations instead of infinite-length representations.
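The erasure set-up can be pictured with a small numerical sketch (illustrative assumptions throughout: the normalized DFT as the unitary transform, a random erasure pattern, and a minimum-norm estimate; the thesis's actual estimators and error analysis are more refined):

```python
import numpy as np

# A signal in the canonical domain is observed through a unitary transform
# (here the normalized DFT); a random subset of measurement-domain
# components is erased, and we reconstruct from the survivors.
rng = np.random.default_rng(2)
n, kept = 64, 48

U = np.fft.fft(np.eye(n)) / np.sqrt(n)         # unitary measurement matrix
x = rng.standard_normal(n)                      # canonical-domain signal
y = U @ x                                       # measurement-domain signal

keep = rng.choice(n, size=kept, replace=False)  # random erasure pattern
U_kept = U[keep, :]                             # surviving measurement rows

# Minimum-norm estimate: U_kept has orthonormal rows, so its pseudoinverse
# is its conjugate transpose; the estimate is the projection of x onto the
# span of the surviving rows, and the residual error reflects the erasure.
x_hat = U_kept.conj().T @ y[keep]
mse = float(np.mean(np.abs(x_hat - x) ** 2))
```

How this error behaves as a function of the erasure pattern, the transform, and the signal statistics is what the mean-square error analysis in the thesis quantifies.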