Browsing by Subject "Block iterative methods"
Now showing 1 - 4 of 4
Item Open Access
Analyzing large sparse Markov chains of Kronecker products (IEEE, 2009) Dayar, Tuğrul
Kronecker products are used to define the underlying Markov chain (MC) in various modeling formalisms, including compositional Markovian models, hierarchical Markovian models, and stochastic process algebras. The motivation behind using a Kronecker-structured representation rather than a flat one is to alleviate the storage requirements associated with the MC. With this approach, systems that are an order of magnitude larger can be analyzed on the same platform. In the Kronecker-based approach, the generator matrix underlying the MC is represented using Kronecker products [6] of smaller matrices and is never explicitly generated. The implementation of transient and steady-state solvers rests on this compact Kronecker representation, thanks to the existence of an efficient vector-Kronecker product multiplication algorithm known as the shuffle algorithm [6]. The transient distribution can be computed through uniformization using vector-Kronecker product multiplications. The steady-state distribution also needs to be computed using vector-Kronecker product multiplications, since direct methods based on complete factorizations, such as Gaussian elimination, normally introduce new nonzeros that cannot be accommodated. The two papers [2], [10] provide good overviews of iterative solution techniques for the analysis of MCs based on Kronecker products. Issues related to reachability analysis, vector-Kronecker product multiplication, and hierarchical state space generation in Kronecker-based matrix representations for large Markov models are surveyed in [5]. Throughout the discussion, we assume that the MC at hand does not have unreachable states, meaning it is irreducible, and we take an algebraic view [7] to discuss recent results related to the analysis of MCs based on Kronecker products independently of modeling formalisms. We provide background material on the Kronecker representation of the generator matrix underlying a CTMC, show that it has a rich structure which is nested and recursive, and introduce a small CTMC whose generator matrix is expressed as a sum of Kronecker products; this CTMC is used as a running example throughout the discussion. We also consider preprocessing of the Kronecker representation so as to expedite numerical analysis. We discuss permuting the nonzero structure of the underlying CTMC symmetrically by reordering, changing the orders of the nested blocks by grouping, and reducing the size of the state space by lumping. The steady-state analysis of CTMCs based on Kronecker products is discussed for block iterative methods, multilevel methods, and preconditioned projection methods, in that order. The results can be extended to DTMCs based on Kronecker products with minor modifications. Areas that need further research are mentioned as they are discussed. Our contributions to this area over the years include work on iterative methods based on splittings and their block versions [11], associated preconditioners to be used with projection methods [4], near complete decomposability [8], a method based on iterative disaggregation for a class of lumpable MCs [9], a class of multilevel methods [3], and a recent method based on decomposition for weakly interacting subsystems [1].
© 2009 IEEE.
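All of the solvers mentioned in this abstract rest on multiplying a vector with a Kronecker product of small factors without ever forming the flat matrix. The following is a minimal NumPy sketch of that operation using tensor reshaping; it computes the same product as the shuffle algorithm of [6] but is not that algorithm's in-place, permutation-based implementation, and the function name and the assumption of square factors are illustrative.

```python
import numpy as np

def kron_vec_mult(factors, x):
    """Return x @ (A_1 kron A_2 kron ... kron A_H) without forming the
    Kronecker product explicitly; factors are square n_h x n_h arrays."""
    dims = [A.shape[0] for A in factors]
    y = x.reshape(dims)                        # view x as an H-way tensor
    for h, A in enumerate(factors):
        # multiply along mode h by A_h, then move the new axis back to slot h
        y = np.moveaxis(np.tensordot(y, A, axes=([h], [0])), -1, h)
    return y.reshape(-1)

# sanity check against the explicit Kronecker product on a tiny example
A1, A2 = np.random.rand(3, 3), np.random.rand(4, 4)
x = np.random.rand(3 * 4)
assert np.allclose(kron_vec_mult([A1, A2], x), x @ np.kron(A1, A2))
```

The cost is proportional to the sizes of the factors rather than to the size of their Kronecker product, which is what makes uniformization and the iterative steady-state solvers above feasible for large state spaces.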
Item Open Access
Experiments with two-stage iterative solvers and preconditioned Krylov subspace methods on nearly completely decomposable Markov chains (1997) Gueaieb, Wail
Preconditioned Krylov subspace methods are state-of-the-art iterative solvers developed mostly in the last fifteen years that may be used, among other things, to solve for the stationary distribution of Markov chains. Assuming the Markov chains of interest are irreducible, the problem amounts to computing a positive solution vector to a homogeneous system of linear algebraic equations with a singular coefficient matrix under a normalization constraint. That is, the (n x 1) unknown stationary vector x in

Ax = 0, ||x||_1 = 1    (0.1)

is sought. Here A = I - P^T is an n x n singular M-matrix, and P is the one-step stochastic transition probability matrix. Despite the recent advances, practicing performance analysts still widely prefer iterative methods based on splittings when they want to compare the performance of newly devised algorithms against existing ones, or when they need candidate solvers to evaluate the performance of a system model at hand. In fact, experimental results with Krylov subspace methods on Markov chains, especially the ill-conditioned nearly completely decomposable (NCD) ones, are few. We believe there is room for research in this area, specifically to help us understand the effect of the degree of coupling of NCD Markov chains and their nonzero structure on the convergence characteristics and space requirements of preconditioned Krylov subspace methods. The work of several researchers has raised important and interesting questions that led to research in another, yet related direction. These questions are the following: "How must one go about partitioning the global coefficient matrix A in equation (0.1) into blocks if the system is NCD and a two-stage iterative solver (such as block successive overrelaxation, SOR) is to be employed? Are block partitionings dictated by the NCD normal form of P necessarily superior to others? Is it worth investigating alternative partitionings? Better yet, for a fixed labelling and partitioning of the states, how does the performance of block SOR (or even that of point SOR) compare to the performance of the iterative aggregation-disaggregation (IAD) algorithm? Finally, is there any merit in using two-stage iterative solvers when preconditioned Krylov subspace methods are available?" Experimental results show that in most of the test cases two-stage iterative solvers are superior to Krylov subspace methods with the chosen preconditioners on NCD Markov chains. For two-stage iterative solvers, there are cases in which a straightforward partitioning of the coefficient matrix gives a faster solution than can be obtained using the NCD normal form.
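To make equation (0.1) concrete, the sketch below applies point SOR sweeps to (I - P^T) x = 0 with renormalization after each sweep. It is only the point version with a dense NumPy matrix and a uniform starting vector, all of which are illustrative choices; the thesis itself is concerned with block (two-stage) variants driven by NCD partitionings, which are not shown here.

```python
import numpy as np

def sor_stationary(P, omega=1.0, tol=1e-10, max_sweeps=10000):
    """Point SOR on (I - P^T) x = 0 with 1-norm normalization after each
    sweep; omega = 1.0 reduces to Gauss-Seidel.  P is row stochastic and
    assumed irreducible (no absorbing state), so every A[i, i] > 0."""
    n = P.shape[0]
    A = np.eye(n) - P.T                   # the singular M-matrix of (0.1)
    x = np.full(n, 1.0 / n)               # start from the uniform distribution
    for _ in range(max_sweeps):
        x_prev = x.copy()
        for i in range(n):
            # solve equation i for x[i] using the most recently updated values
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1.0 - omega) * x[i] - omega * sigma / A[i, i]
        x /= np.abs(x).sum()              # enforce the constraint ||x||_1 = 1
        if np.abs(x - x_prev).max() < tol:
            return x
    return x
```

The block solvers studied in the thesis follow the same pattern, except that the scalar division by A[i, i] is replaced by a solve with a diagonal block of the chosen partitioning.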
Item Open Access
Steady-state analysis of Google-like stochastic matrices (2007) Noyan, Gökçe Nil
Many search engines use a two-step process to retrieve pages related to a user's query from the web. In the first step, traditional text processing is performed to find all pages matching the given query terms. Due to the massive size of the web, this step can result in thousands of retrieved pages. In the second step, many search engines sort the list of retrieved pages according to some ranking criterion to make it manageable for the user. One popular way to create this ranking is to exploit additional information inherent in the web due to its hyperlink structure. One successful and well-publicized link-based ranking system is PageRank, the ranking system used by the Google search engine. The dynamically changing matrices reflecting the hyperlink structure of the web and used by Google in ranking pages are not only very large, but they are also sparse, reducible, stochastic matrices with some zero rows. Ranking pages amounts to solving for the steady-state vectors of linear combinations of these matrices with appropriately chosen rank-1 matrices. The method of choice for this task appears to be the power method. Certain improvements have been obtained using techniques such as quadratic extrapolation and iterative aggregation. In this thesis, we propose iterative methods based on various block partitionings, including those with triangular diagonal blocks obtained using cutsets, for the computation of the steady-state vector of such stochastic matrices. The proposed iterative methods, together with the power and quadratically extrapolated power methods, are coded into a software tool. Experimental results on benchmark matrices show that it is possible to recommend Gauss-Seidel for the easier web problems and block Gauss-Seidel with partitionings based on a block upper triangular form in the remaining problems, although the latter takes about twice as much memory as the quadratically extrapolated power method.
Item Open Access
Steady-state analysis of Google-like stochastic matrices with block iterative methods (Kent State University, 2011) Dayar, T.; Noyan, G. N.
A Google-like matrix is a positive stochastic matrix given by a convex combination of a sparse, nonnegative matrix and a particular rank-one matrix. Google itself uses the steady-state vector of a large matrix of this form to help order web pages in a search engine. We investigate the computation of the steady-state vectors of such matrices using block iterative methods. The block partitionings considered include those based on block triangular form and those having triangular diagonal blocks obtained using cutsets. Numerical results show that block Gauss-Seidel with partitionings based on block triangular form is most often the best approach. However, there are cases in which a block partitioning with triangular diagonal blocks is better, and the Gauss-Seidel method is usually competitive.
Copyright © 2011, Kent State University.
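Both abstracts above measure block iterative methods against the power method on a Google-like matrix. In the usual PageRank formulation this matrix is G = alpha (P + d v^T) + (1 - alpha) 1 v^T, where d marks the zero (dangling) rows of the sparse part P. The sketch below is a minimal NumPy baseline for that power method; the damping factor alpha = 0.85, the uniform personalization vector v, and the dense representation of P are illustrative assumptions, and the block Gauss-Seidel solvers the paper actually recommends are not shown.

```python
import numpy as np

def google_power(P, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power method for the steady-state (row) vector of the Google-like
    matrix built from the sparse part P, applied without ever forming the
    dense rank-one teleportation part."""
    n = P.shape[0]
    v = np.full(n, 1.0 / n)                    # uniform personalization vector
    dangling = (P.sum(axis=1) == 0)            # rows of P that are all zero
    x = v.copy()
    for _ in range(max_iter):
        x_new = alpha * (x @ P)                             # sparse part of G
        x_new += (alpha * x[dangling].sum() + (1.0 - alpha)) * v  # rank-one part
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x
```

Each iteration costs one sparse vector-matrix product plus O(n) work for the rank-one correction, which is the baseline that block Gauss-Seidel with block triangular partitionings is reported to beat on most of the benchmark problems.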