Fast and robust solution techniques for large scale linear least squares problems
Özaslan, İbrahim Kurban
Momentum Iterative Hessian Sketch (M-IHS) techniques, a group of solvers for large scale linear Least Squares (LS) problems, are proposed and analyzed in detail. The proposed M-IHS techniques are obtained by incorporating Heavy Ball acceleration into the Iterative Hessian Sketch algorithm, and they provide significant improvements over randomized preconditioning techniques. By using approximate solvers within the iterations, the proposed techniques avoid all matrix decompositions and inversions, which is one of their main advantages over alternative solvers such as Blendenpik and LSRN. Similar to the Chebyshev Semi-iterations, the M-IHS variants do not use any inner products and thus eliminate the corresponding synchronization steps in hierarchical or distributed memory systems, yet the M-IHS converges faster than Chebyshev Semi-iteration based solvers. Lower bounds on the required sketch size for various randomized distributions are established through the error analyses of the M-IHS variants. Unlike previously proposed approaches, the M-IHS techniques can produce a solution approximation using sketch sizes proportional to the statistical dimension, which is always smaller than the rank of the coefficient matrix. Additionally, hybrid schemes are introduced to estimate the unknown ℓ2-norm regularization parameter during the iterations of the M-IHS techniques. Unlike conventional hybrid methods, the proposed Hybrid M-IHS techniques estimate the regularization parameter from lower dimensional sub-problems constructed by random projections rather than deterministic projections onto Krylov subspaces.
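The core iteration summarized above, a gradient step preconditioned by a sketched Hessian plus a Heavy Ball momentum term, can be illustrated with a toy sketch. The problem sizes, the Gaussian sketch, the step size and momentum values (tuned from the expected spread of the sketched spectrum), and the direct `np.linalg.solve` call are all illustrative assumptions for this example; the thesis versions avoid such explicit solves by using approximate inner solvers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 50, 400          # rows, columns, sketch size (m >= d); assumed sizes
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Sketch the coefficient matrix once; (SA)^T(SA) serves as an
# approximate Hessian of the LS cost 0.5*||Ax - b||^2.
S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sketch (assumed choice)
H = (S @ A).T @ (S @ A)

# Heavy Ball parameters suggested by the concentration of the sketched
# spectrum around 1 with spread ~sqrt(d/m) (illustrative tuning).
alpha, beta = (1.0 - d / m) ** 2, d / m
x = x_prev = np.zeros(d)
for _ in range(30):
    g = A.T @ (A @ x - b)             # exact full-space gradient
    step = np.linalg.solve(H, g)      # approximate Newton direction; a direct
                                      # solve here only for illustration
    x, x_prev = x - alpha * step + beta * (x - x_prev), x

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
rel_err = np.linalg.norm(x - x_star) / np.linalg.norm(x_star)
```

Because the sketched Hessian is computed once and reused, each iteration costs only two matrix-vector products with `A` plus a small `d`-dimensional solve, and no inner products with full-length vectors are needed outside the gradient computation.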
Since the lower dimensional sub-problems that arise during the iterations of the Hybrid M-IHS variants are close approximations to the Newton sub-systems, and since the accuracy of their solutions increases exponentially, the parameters estimated from them rapidly converge to a proper regularization parameter for the full problem. In numerical experiments conducted at several noise levels, the Hybrid M-IHS variants consistently estimated better regularization parameters and constructed solutions with smaller errors than the direct methods, in far fewer iterations than the conventional hybrid methods. In large scale applications where the coefficient matrix is distributed across a memory array, the proposed Hybrid M-IHS variants provide improved efficiency by minimizing the number of distributed matrix-vector multiplications with the coefficient matrix.
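To illustrate the general idea of estimating a regularization parameter from a randomly projected sub-problem, the toy example below sketches an ill-posed ridge problem and picks λ by minimizing Generalized Cross-Validation (GCV) on the small sketched problem. The test problem, the Gaussian projection, the GCV criterion, and the λ grid are all assumptions made for this illustration; the thesis's Hybrid M-IHS estimates the parameter within its iterations rather than on a one-shot sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 500, 50, 200                        # rows, columns, sketch size (assumed)
# Ill-posed test problem: geometrically decaying singular values plus noise.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
s = np.logspace(0, -8, d)
A = (U * s) @ V.T
x_true = V @ (1.0 / (1.0 + np.arange(d)))     # smoothly decaying coefficients
b = A @ x_true + 1e-2 * rng.standard_normal(n)

# Randomly project the problem onto a lower dimensional sub-problem.
S = rng.standard_normal((m, n)) / np.sqrt(m)
SA, Sb = S @ A, S @ b
Us, ss, Vst = np.linalg.svd(SA, full_matrices=False)
c = Us.T @ Sb

def gcv(lam):
    """GCV score of the sketched ridge problem at regularization lam."""
    f = ss**2 / (ss**2 + lam)                 # ridge filter factors
    resid2 = np.sum(((1 - f) * c) ** 2) + np.sum(Sb**2) - np.sum(c**2)
    return m * resid2 / (m - np.sum(f)) ** 2

grid = np.logspace(-10, 2, 61)
lam_hat = grid[np.argmin([gcv(l) for l in grid])]

def ridge(lam):
    """Ridge solution of the sketched sub-problem via its SVD."""
    return Vst.T @ (ss / (ss**2 + lam) * c)

err_hat = np.linalg.norm(ridge(lam_hat) - x_true)   # GCV-selected parameter
err_tiny = np.linalg.norm(ridge(grid[0]) - x_true)  # near-zero regularization
```

With near-zero regularization the noise in the small singular directions is amplified, while the GCV-selected λ suppresses those directions; the contrast between `err_hat` and `err_tiny` shows why a data-driven parameter estimated on the sub-problem is useful.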
Keywords: Oblivious subspace embeddings