Browsing by Subject "Hypergraph partitioning"
Now showing 1 - 20 of 53
Item Open Access: Addressing volume and latency overheads in 1D-parallel sparse matrix-vector multiplication (Springer, 2017-08-09) Acer, Seher; Selvitopi, Oğuz; Aykanat, Cevdet
The scalability of sparse matrix-vector multiplication (SpMV) on distributed memory systems depends on multiple factors that involve different communication cost metrics. The irregular sparsity pattern of the coefficient matrix manifests itself as high bandwidth (total and/or maximum volume) and/or high latency (total and/or maximum message count) overhead. In this work, we propose a hypergraph partitioning model which combines two earlier models for one-dimensional partitioning, one addressing total and maximum volume, and the other one addressing total volume and total message count. Our model relies on the recursive bipartitioning paradigm and simultaneously addresses three cost metrics in a single partitioning phase in order to reduce volume and latency overheads. We demonstrate the validity of our model on a large dataset that contains more than 300 matrices. The results indicate that compared to the earlier models, our model significantly improves the scalability of SpMV. © 2017, Springer International Publishing AG.
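For reference alongside the item above: in a 1D row-parallel SpMV, each processor owns a block of rows, the x-vector entries it must fetch from other processors determine the volume (bandwidth) overhead, and the number of distinct processors it exchanges messages with determines the latency overhead. A minimal sketch of the local CSR kernel, with illustrative names and no claim to match the paper's implementation:

```c
#include <stddef.h>

/* Compressed sparse row (CSR) storage for a locally owned row block. */
typedef struct {
    size_t  nrows;   /* number of locally owned rows                */
    size_t *rowptr;  /* length nrows+1: start of each row in cols[] */
    size_t *cols;    /* column indices of nonzeros                  */
    double *vals;    /* nonzero values                              */
} csr_t;

/* y = A*x for the local row block.  In the distributed setting, x must
 * already hold the entries received from other processors; the amount
 * of such data is the volume overhead and the number of distinct
 * senders is the latency overhead targeted by the partitioning models. */
static void spmv_local(const csr_t *A, const double *x, double *y)
{
    for (size_t i = 0; i < A->nrows; ++i) {
        double sum = 0.0;
        for (size_t k = A->rowptr[i]; k < A->rowptr[i + 1]; ++k)
            sum += A->vals[k] * x[A->cols[k]];
        y[i] = sum;
    }
}
```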
Item Open Access: Balance preserving min-cut replication set for a K-way hypergraph partitioning (2010) Yazıcı, Volkan
Replication is a widely used technique in information retrieval and database systems for providing fault tolerance and reducing parallelization and processing costs. Combinatorial models based on hypergraph partitioning are proposed for various problems arising in information retrieval and database systems. We consider the possibility of using vertex replication to improve the quality of hypergraph partitioning. In this study, we focus on the Balance Preserving Min-Cut Replication Set (BPMCRS) problem, where we are initially given a maximum replication capacity and a K-way hypergraph partition with an initial imbalance ratio. The objective in the BPMCRS problem is finding optimal vertex replication sets for each part of the given partition such that the initial cutsize of the partition is improved as much as possible and the initial imbalance is either preserved or reduced under the given replication capacity constraint. In order to address the BPMCRS problem, we propose a model based on a unique blend of coarsening and integer linear programming (ILP) schemes, where the coarsening algorithm is based on the Dulmage-Mendelsohn decomposition. Experiments show that the ILP formulation coupled with the Dulmage-Mendelsohn-decomposition-based coarsening provides high-quality results in feasible execution times for reducing the cost of a given K-way hypergraph partition.

Item Open Access: Cache locality exploiting methods and models for sparse matrix-vector multiplication (2009) Akbudak, Kadir
The sparse matrix-vector multiplication (SpMxV) is an important kernel operation widely used in linear solvers. The same sparse matrix is multiplied by a dense vector repeatedly in these solvers to solve a system of linear equations. High performance gains can be obtained if we can take advantage of today's deep cache hierarchy in SpMxV operations. Matrices with irregular sparsity patterns make it difficult to utilize data locality effectively in SpMxV computations. Different techniques are proposed in the literature to utilize the cache hierarchy effectively via exploiting data locality during SpMxV. In this work, we investigate two distinct frameworks for cache-aware/cache-oblivious SpMxV: single matrix-vector multiply and multiple submatrix-vector multiplies. For the single matrix-vector multiply framework, we propose a cache-size-aware top-down row/column-reordering approach based on 1D sparse matrix partitioning that utilizes the recently proposed appropriate hypergraph models of sparse matrices, and a cache-oblivious bottom-up approach based on hierarchical clustering of rows/columns with similar sparsity patterns. We also propose a column compression scheme as a preprocessing step which makes these two approaches cache-line-size aware. The multiple submatrix-vector multiplies framework depends on partitioning the matrix into multiple nonzero-disjoint submatrices. For the effective matrix-to-submatrix partitioning required in this framework, we propose a cache-size-aware top-down approach based on 2D sparse matrix partitioning that utilizes the recently proposed fine-grain hypergraph model. For this framework, we also propose a traveling salesman formulation for an effective ordering of individual submatrix-vector multiply operations. We evaluate the validity of our models and methods on a wide range of sparse matrices. Experimental results show that the proposed methods and models outperform state-of-the-art schemes.

Item Open Access: Cartesian partitioning models for 2D and 3D parallel SpGEMM algorithms (IEEE, 2020) Demirci, Gündüz Vehbi; Aykanat, Cevdet
The focus of this work is the distributed-memory parallelization of sparse general matrix-matrix multiplication (SpGEMM). Parallel SpGEMM algorithms are classified under one-dimensional (1D), 2D, and 3D categories denoting the number of dimensions by which the 3D sparse workcube representing the iteration space of SpGEMM is partitioned. Recently proposed successful 2D- and 3D-parallel SpGEMM algorithms benefit from upper bounds on communication overheads enforced by 2D and 3D cartesian partitioning of the workcube on 2D and 3D virtual processor grids, respectively. However, these methods are based on random cartesian partitioning and do not utilize the sparsity patterns of SpGEMM instances for reducing the communication overheads. We propose hypergraph models for 2D and 3D cartesian partitioning of the workcube to further reduce the communication overheads of these 2D- and 3D-parallel SpGEMM algorithms. The proposed models utilize two- and three-phase partitioning that exploit multi-constraint hypergraph partitioning formulations. Extensive experimentation performed on 20 SpGEMM instances using up to 900 processors demonstrates that the proposed partitioning models significantly improve the scalability of the 2D and 3D algorithms. For example, in the 2D-parallel SpGEMM algorithm on 900 processors, the proposed partitioning model achieves 85 and 42 percent decreases in total volume and total number of messages, respectively, leading to 1.63 times higher speedup than random partitioning, on average.
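The 3D workcube mentioned in the Cartesian partitioning item above is the iteration space of C = AB; a generic sketch of the correspondence, in standard notation and not taken from the paper itself:

```latex
% Iteration space ("workcube") of C = AB: voxel (i, j, k) represents the
% scalar update c_{ij} += a_{ik} b_{kj} and exists only when both a_{ik}
% and b_{kj} are nonzero.
\[
  c_{ij} \;=\; \sum_{k \,:\, a_{ik} \neq 0,\; b_{kj} \neq 0} a_{ik}\, b_{kj}
\]
% A 2D cartesian partition splits the workcube along the i and j
% dimensions over a 2D processor grid; a 3D cartesian partition
% additionally splits the k dimension over a 3D grid.
```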
Item Open Access: Clustering spatial networks for aggregate query processing: a hypergraph approach (Elsevier Ltd, 2008-03) Demir, E.; Aykanat, Cevdet; Cambazoglu, B. B.
In spatial networks, clustering adjacent data to disk pages is highly likely to reduce the number of disk page accesses made by the aggregate network operations during query processing. For this purpose, different techniques based on the clustering graph model are proposed in the literature. In this work, we show that the state-of-the-art clustering graph model is not able to correctly capture the disk access costs of aggregate network operations. Moreover, we propose a novel clustering hypergraph model that correctly captures the disk access costs of these operations. The proposed model aims to minimize the total number of disk page accesses in aggregate network operations. Based on this model, we further propose two adaptive recursive bipartitioning schemes to reduce the number of allocated disk pages while trying to minimize the number of disk page accesses. We evaluate our clustering hypergraph model and recursive bipartitioning schemes on a wide range of road network datasets. The results of the conducted experiments show that the proposed model is quite effective in reducing the number of disk accesses incurred by the network operations. © 2007 Elsevier B.V. All rights reserved.

Item Open Access: Decomposing linear programs for parallel solution (Springer, 1995-08) Pınar, Ali; Çatalyürek, Ümit V.; Aykanat, Cevdet; Pınar, Mustafa
Coarse-grain parallelism inherent in the solution of Linear Programming (LP) problems with block angular constraint matrices has been exploited in recent research works. However, these approaches suffer from unscalability and load imbalance since they exploit only the existing block angular structure of the LP constraint matrix. In this paper, we consider decomposing LP constraint matrices to obtain block angular structures with a specified number of blocks for scalable parallelization. We propose hypergraph models to represent LP constraint matrices for decomposition. In these models, the decomposition problem reduces to the well-known hypergraph partitioning problem. A Kernighan-Lin-based multiway hypergraph partitioning heuristic is implemented for experimenting with the performance of the proposed hypergraph models on the decomposition of LP problems selected from the NETLIB suite. Initial results are promising and justify further research on other hypergraph partitioning heuristics for decomposing large LP problems. © Springer-Verlag Berlin Heidelberg 1996.

Item Open Access: An effective model to decompose linear programs for parallel solution (Springer, 1996-08) Pınar, Ali; Aykanat, Cevdet
Although the inherent parallelism in the solution of block angular Linear Programming (LP) problems has been exploited in many research works, the literature that addresses decomposing constraint matrices into block angular form for parallel solution is very rare and recent. We have previously proposed hypergraph models, which reduced the problem to the hypergraph partitioning problem. However, the quality of the results reported was limited due to the hypergraph partitioning tools we used. Very recently, multilevel graph partitioning heuristics have been proposed, leading to very successful graph partitioning tools: Chaco and Metis. In this paper, we propose an effective graph model to decompose matrices into block angular form, which reduces the problem to the well-known graph partitioning by vertex separator problem. We have tested the validity of our proposed model with various LP problems selected from NETLIB and other sources. The results are very attractive both in terms of solution quality and running times. © Springer-Verlag Berlin Heidelberg 1996.
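The block angular form targeted by the two LP decomposition items above has the shape sketched below, where the K diagonal blocks can be handled by different processors and only the coupling rows tie the subproblems together (a generic illustration, not a figure from either paper):

```latex
% Primal block angular form with K diagonal blocks A_1, ..., A_K and a
% set of coupling (linking) rows C_1, ..., C_K at the bottom.  The
% diagonal blocks can be handled concurrently; the decomposition methods
% try to keep the coupling rows few while balancing the block sizes.
\[
  A \;=\;
  \begin{bmatrix}
    A_1 &        &     \\
        & \ddots &     \\
        &        & A_K \\
    C_1 & \cdots & C_K
  \end{bmatrix}
\]
```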
Item Open Access: Encapsulating multiple communication-cost metrics in partitioning sparse rectangular matrices for parallel matrix-vector multiplies (SIAM, 2004) Uçar, B.; Aykanat, Cevdet
This paper addresses the problem of one-dimensional partitioning of structurally unsymmetric square and rectangular sparse matrices for parallel matrix-vector and matrix-transpose-vector multiplies. The objective is to minimize the communication cost while maintaining the balance on computational loads of processors. Most of the existing partitioning models consider only the total message volume, hoping that minimizing this communication-cost metric is likely to reduce the other metrics. However, the total message latency (start-up time) may be more important than the total message volume. Furthermore, the maximum message volume and latency handled by a single processor are also important metrics. We propose a two-phase approach that encapsulates all four of these communication-cost metrics. The objective in the first phase is to minimize the total message volume while maintaining the computational-load balance. The objective in the second phase is to encapsulate the remaining three communication-cost metrics. We propose communication-hypergraph and partitioning models for the second phase. We then present several methods for partitioning communication hypergraphs. Experiments on a wide range of test matrices show that the proposed approach yields very effective partitioning results. A parallel implementation on a PC cluster verifies that the theoretical improvements shown by the partitioning results hold in practice.

Item Open Access: Exploiting locality in sparse matrix-matrix multiplication on many-core architectures (IEEE Computer Society, 2017) Akbudak, K.; Aykanat, Cevdet
Exploiting spatial and temporal localities is investigated for efficient row-by-row parallelization of the general sparse matrix-matrix multiplication (SpGEMM) operation of the form C = AB on many-core architectures. Hypergraph and bipartite graph models are proposed for 1D rowwise partitioning of matrix A to evenly partition the work across threads with the objective of reducing the number of B-matrix words to be transferred from the memory and between different caches. A hypergraph model is proposed for B-matrix column reordering to exploit spatial locality in accessing entries of thread-private temporary arrays, which are used to accumulate results for C-matrix rows. A similarity graph model is proposed for B-matrix row reordering to increase temporal reuse of these accumulation array entries. The proposed models and methods are tested on a wide range of sparse matrices from real applications, and the experiments were carried out on a 60-core Intel Xeon Phi processor as well as a two-socket Xeon processor. Results show the validity of the models and methods proposed for enhancing the locality in parallel SpGEMM operations. © 1990-2012 IEEE.
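The row-by-row (Gustavson-style) SpGEMM formulation discussed in the item above accumulates each C-matrix row in a thread-private dense array; the locality of this accumulator and of the fetched B rows is what the reordering models target. A compact, single-threaded sketch under these assumptions, with illustrative names rather than the paper's code:

```c
#include <stddef.h>

/* CSR matrix with n rows; rowptr has n+1 entries. */
typedef struct {
    size_t  n;
    size_t *rowptr, *cols;
    double *vals;
} csr_t;

/* Compute row i of C = A*B in the row-by-row (Gustavson) formulation.
 * acc      : dense accumulator, length = number of columns of B
 * seen     : 0/1 flags of the same length, all zero on entry
 * cols_out : records which columns of row i received a contribution
 * Returns the number of nonzeros produced for row i.  The reordering
 * models in the item above aim to keep accesses to acc[] spatially
 * close and to reuse recently fetched B rows across consecutive A rows. */
static size_t spgemm_row(const csr_t *A, const csr_t *B, size_t i,
                         double *acc, unsigned char *seen, size_t *cols_out)
{
    size_t nnz = 0;
    for (size_t p = A->rowptr[i]; p < A->rowptr[i + 1]; ++p) {
        size_t k   = A->cols[p];      /* a_{ik} selects row k of B    */
        double aik = A->vals[p];
        for (size_t q = B->rowptr[k]; q < B->rowptr[k + 1]; ++q) {
            size_t j = B->cols[q];
            if (!seen[j]) {           /* first contribution to c_{ij} */
                seen[j] = 1;
                acc[j]  = 0.0;
                cols_out[nnz++] = j;
            }
            acc[j] += aik * B->vals[q];
        }
    }
    return nnz;  /* caller harvests acc[cols_out[0..nnz-1]], then clears seen[] */
}
```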
Item Open Access: Exploiting replicated data for communication load balancing in image-space parallel direct volume rendering of unstructured grids (2009) Okuyan, Erkan
The focus of this work is on parallel volume rendering applications in which renderings with different parameters are successively repeated over the same dataset. The only reason for inter-task interaction is the existence of data primitives that are inputs to several tasks. Both the computational structure and the expected task execution times may change during successive rendering instances. A change in computational structure means a change in the data primitive requirements of tasks. Since the individual processors of a parallel system have a limited storage capacity, we can reserve a limited amount of storage for holding replicas at each processor. For the parallelization of a particular rendering instance, the remapping model should utilize the replication pattern of the previous rendering instance(s) to reduce the communication overhead due to the data replication requirement of the current rendering instance. We propose a two-phase model for solving this problem. The hypergraph-partitioning-based model proposed for the first phase aims to minimize the total message volume that will be incurred due to the replication/migration of input data while maintaining balance on the computational and receive-volume loads of processors. The network-flow-based model proposed for the second phase aims to minimize the maximum message volume handled by processors via utilizing the flexibility in assigning send-communication tasks to processors, which is introduced by data replication. The validity of our proposed model is verified on the image-space parallelization of a direct volume rendering algorithm.

Item Open Access: Graph and hypergraph partitioning (1993) Daşdan, Ali
Graph and hypergraph partitioning have many important applications in various areas such as VLSI layout, mapping, and graph theory. For graph and hypergraph partitioning, there are very successful heuristics mainly based on Kernighan-Lin's minimization technique. We propose two novel approaches for multiple-way graph and hypergraph partitioning. The proposed algorithms drastically outperform the best multiple-way partitioning algorithm both on randomly generated graph instances and on benchmark circuits. The proposed algorithms convey all the advantages of the algorithms based on Kernighan-Lin's minimization technique, such as their robustness. However, they do not convey many of the disadvantages of those algorithms, such as their poor performance on sparse test cases. The proposed algorithms introduce very interesting ideas that are also applicable to the existing algorithms without much effort.

Item Open Access: Hypergraph based declustering for multi-disk databases (2000) Koyutürk, Mehmet
In very large distributed database systems, the data is declustered in order to exploit parallelism while processing a query. Declustering refers to allocating the data onto multiple disks in such a way that the tuples belonging to a relation are distributed evenly across the disks. There are many declustering strategies proposed in the literature; however, these strategies are domain-specific or have deficiencies. We propose a model that exactly fits the problem and show that iterative improvement schemes can capture the declustering objective on a detailed per-relation basis. We provide a two-phase iterative-improvement-based algorithm and appropriate gain functions for this algorithm. The experimental results show that the proposed algorithm provides a significant performance improvement compared to the state-of-the-art graph-partitioning-based declustering strategy.
Item Open Access: Hypergraph models for parallel sparse matrix-matrix multiplication (2015-09) Akbudak, Kadir
Multiplication of two sparse matrices (i.e., sparse matrix-matrix multiplication, abbreviated as SpGEMM) is a widely used kernel in many applications such as molecular dynamics simulations, graph operations, and linear programming. We identify parallel formulations of the SpGEMM operation in the form C = AB for distributed-memory architectures. Using these formulations, we propose parallel SpGEMM algorithms that have multiplication and communication phases: the multiplication phase consists of local SpGEMM computations without any communication, and the communication phase consists of transferring the required input/output matrices. For these algorithms, three hypergraph models are proposed. These models are used to partition the input and output matrices simultaneously. The input matrices A and B are partitioned in one dimension in all of these hypergraph models. The output matrix C is partitioned in two dimensions, which is nonzero-based, in the first hypergraph model, and it is partitioned in one dimension in the second and third models. In the partitioning of these hypergraph models, the constraint on vertex weights corresponds to computational load balancing among processors for the multiplication phase of the proposed SpGEMM algorithms, and the objective, which is minimizing the cutsize defined in terms of the costs of the cut hyperedges, corresponds to minimizing the communication volume due to transferring the required matrix entries in the communication phase of the SpGEMM algorithms. We also propose models for reducing the total number of messages while maintaining balance on the communication volumes handled by processors during the communication phase of the SpGEMM algorithms. An SpGEMM library for distributed-memory architectures is developed in order to verify the empirical validity of our models. The library uses MPI (Message Passing Interface) for performing communication in the parallel setting. The developed SpGEMM library is run on SpGEMM instances from various realistic applications, and the experiments are carried out on a large parallel IBM BlueGene/Q system, named JUQUEEN. In the experimentation of the proposed hypergraph models, high speedup values are observed.
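The cutsize objective referred to in this and several other items is one of the two standard hypergraph partitioning metrics, where c(n) is the cost of net n and λ(n) is the number of parts it connects. A generic statement of the metrics, not quoted from any single item:

```latex
% Cutsize metrics for a K-way partition \Pi of a hypergraph: N_cut is
% the set of cut nets, c(n) is the cost of net n, and \lambda(n) is the
% number of parts connected by net n.
\[
  \chi_{\mathrm{cut}}(\Pi) \;=\; \sum_{n \in N_{\mathrm{cut}}} c(n),
  \qquad
  \chi_{\lambda}(\Pi) \;=\; \sum_{n \in N_{\mathrm{cut}}} \bigl(\lambda(n) - 1\bigr)\, c(n).
\]
% Under the column-net/row-net SpMxV models appearing later in this
% listing, minimizing \chi_{\lambda} corresponds exactly to minimizing
% the total communication volume of the parallel multiply.
```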
Item Open Access: Hypergraph models for sparse matrix partitioning and reordering (1999-11) Çatalyürek, Ümit Veysel
Graphs have been widely used to represent sparse matrices for various scientific applications, including one-dimensional (1D) decomposition of sparse matrices for parallel sparse matrix-vector multiplication (SpMxV) and sparse matrix reordering for low-fill factorization. The standard graph-partitioning-based 1D decomposition of sparse matrices does not reflect the actual communication volume requirement for parallel SpMxV. We propose two computational hypergraph models which avoid this crucial deficiency of the graph model in 1D decomposition. The proposed models reduce the 1D decomposition problem to the well-known hypergraph partitioning problem. In the literature, there is a lack of 2D decomposition heuristics which directly minimize the communication requirements for parallel SpMxV computations. Three novel hypergraph models are introduced for 2D decomposition of sparse matrices for minimizing the communication volume requirement. The first hypergraph model is proposed for fine-grain 2D decomposition of the sparse matrices for parallel SpMxV. The second hypergraph model for 2D decomposition is proposed to produce jagged-like decomposition of the sparse matrix. Checkerboard-decomposition-based parallel matrix-vector multiplication algorithms are widely encountered in the literature. However, only the load balancing problem is addressed in those works. Here, we propose a new hypergraph model which aims at minimizing the communication volume while maintaining the load balance among the processors for checkerboard decomposition, as the third model for 2D decomposition. The proposed model reduces the decomposition problem to the multi-constraint hypergraph partitioning problem. The notion of multi-constraint partitioning has recently become popular in graph partitioning. We applied multi-constraint partitioning to the hypergraph partitioning problem for solving checkerboard partitioning. Graph partitioning by vertex separator (GPVS) is widely used for nested-dissection-based low-fill ordering of sparse matrices for the direct solution of linear systems. In this work, we also show that the GPVS problem can be formulated as hypergraph partitioning. We exploit this finding to develop a novel hypergraph-partitioning-based nested dissection ordering. The recently proposed successful multilevel framework is exploited to develop a multilevel hypergraph partitioning tool, PaToH, for the experimental verification of our proposed hypergraph models. Experimental results on a wide range of realistic sparse test matrices confirm the validity of the proposed hypergraph models. In terms of communication volume, the proposed hypergraph models produce 30% and 59% better decompositions than the graph model in 1D and 2D decompositions of sparse matrices for parallel SpMxV computations, respectively. The proposed hypergraph-partitioning-based nested dissection produces 25% to 45% better orderings than the widely used multiple minimum degree ordering in the ordering of various test matrices arising from different applications.

Item Open Access: Hypergraph partitioning and reordering for parallel sparse triangular solves and tensor decomposition (2021-07) Torun, Tuğba
Several scientific and real-world problems require computations with sparse matrices, or more generally, sparse tensors, which are multi-dimensional arrays. For sparse matrix computations, parallelization of sparse triangular systems introduces significant challenges because of the sequential nature of the computations involved. One approach to parallelizing sparse triangular systems is to use the sparse triangular SPIKE (stSPIKE) algorithm, which was originally proposed for shared-memory architectures. stSPIKE decouples the problem into independent smaller systems and requires the solution of a much smaller reduced sparse triangular system. We extend and implement stSPIKE for distributed-memory architectures. Then we propose distributed-memory parallel Gauss-Seidel (dmpGS) and ILU (dmpILU) algorithms by means of stSPIKE. Furthermore, we propose novel hypergraph partitioning models and in-block reordering methods for minimizing the size and nonzero count of the reduced systems that arise in dmpGS and dmpILU. For sparse tensor computations, tensor decomposition is widely used in the analysis of multi-dimensional data. The canonical polyadic decomposition (CPD) is one of the most popular tensor decomposition methods, which is commonly computed by the CPD-ALS algorithm. Due to the high computational and memory demands of CPD-ALS, it is inevitable to use a distributed-memory-parallel algorithm for efficiency. The medium-grain CPD-ALS algorithm, which adopts multi-dimensional cartesian tensor partitioning, is one of the most successful distributed CPD-ALS algorithms for sparse tensors. We propose a novel hypergraph partitioning model, CartHP, whose partitioning objective correctly encapsulates the minimization of the total communication volume of multi-dimensional cartesian tensor partitioning. Extensive experiments on real-world sparse matrices and tensors validate the parallel scalability of the proposed algorithms as well as the effectiveness of the proposed hypergraph partitioning and reordering models.
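For context on the tensor part of the item above: the canonical polyadic decomposition (CPD) of a third-order tensor and the quantity CPD-ALS fits, in standard notation and not quoted from the thesis:

```latex
% Rank-R canonical polyadic decomposition of a third-order tensor X,
% with factor matrices A = [a_1 ... a_R], B = [b_1 ... b_R],
% C = [c_1 ... c_R]; \circ is the vector outer product.  CPD-ALS fixes
% two factors and updates the third, cycling over A, B, and C.
\[
  \mathcal{X} \;\approx\; \sum_{r=1}^{R} a_r \circ b_r \circ c_r,
  \qquad
  x_{ijk} \;\approx\; \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr}.
\]
```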
Item Open Access: A hypergraph partitioning model for profile minimization (Society for Industrial and Applied Mathematics Publications, 2019) Acer, Seher; Kayaaslan, E.; Aykanat, Cevdet
In this paper, the aim is to symmetrically permute the rows and columns of a given sparse symmetric matrix so that the profile of the permuted matrix is minimized. We formulate this permutation problem by first defining the m-way ordered hypergraph partitioning (moHP) problem and then showing the correspondence between the profile minimization and moHP problems. For solving the moHP problem, we propose a recursive-bipartitioning-based hypergraph partitioning algorithm, which we refer to as the moHP algorithm. This algorithm achieves a linear part ordering via left-to-right bipartitioning. In this algorithm, we utilize fixed vertices and two novel cut-net manipulation techniques in order to address the minimization objective of the moHP problem. We show the correctness of the moHP algorithm and describe how the existing partitioning tools can be utilized for its implementation. Experimental results on an extensive set of matrices show that the moHP algorithm obtains a smaller profile than the state-of-the-art profile reduction algorithms, which then results in considerable improvements in the factorization runtime in a direct solver.
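The profile minimized in the item above is the standard envelope measure below, stated for a structurally symmetric n x n matrix; this is one common convention (some references add n to count the diagonal entries) and is not quoted from the paper:

```latex
% Profile (envelope size) of a structurally symmetric n x n matrix
% A = (a_{ij}): f_i is the column of the first nonzero in row i, and
% row i contributes its row bandwidth i - f_i.
\[
  f_i \;=\; \min\{\, j \le i \;:\; a_{ij} \neq 0 \,\}, \qquad
  P(A) \;=\; \sum_{i=1}^{n} \bigl(i - f_i\bigr).
\]
```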
Item Open Access: Hypergraph partitioning-based fill-reducing ordering for symmetric matrices (Society for Industrial and Applied Mathematics, 2011) Çatalyürek, Ü. V.; Aykanat, Cevdet; Kayaaslan, E.
A typical first step of a direct solver for the linear system Mx = b is reordering of the symmetric matrix M to improve the execution time and space requirements of the solution process. In this work, we propose a novel nested-dissection-based ordering approach that utilizes hypergraph partitioning. Our approach is based on the formulation of the graph partitioning by vertex separator (GPVS) problem as a hypergraph partitioning problem. This new formulation is immune to the deficiency of GPVS in a multilevel framework and hence enables better orderings. In matrix terms, our method relies on the existence of a structural factorization of the input matrix M in the form M = AA^T (or M = AD^2A^T). We show that the partitioning of the row-net hypergraph representation of the rectangular matrix A induces a GPVS of the standard graph representation of matrix M. In the absence of such a factorization, we also propose simple yet effective structural factorization techniques that are based on finding an edge clique cover of the standard graph representation of matrix M, and hence applicable to any arbitrary symmetric matrix M. Our experimental evaluation has shown that the proposed method achieves better orderings in comparison to state-of-the-art graph-based ordering tools, even for symmetric matrices where a structural M = AA^T factorization is not provided as an input. For matrices coming from linear programming problems, our method enables even faster and better orderings. © 2011 Society for Industrial and Applied Mathematics.

Item Open Access: A hypergraph-partitioning based remapping model for image-space parallel volume rendering (2000) Cambazoğlu, Berkant Barla
Ray-casting is a popular direct volume rendering technique used to explore the content of 3D data. Although this technique is capable of producing high-quality visualizations, its slowness prevents interactive use. The major method to overcome this speed limitation is parallelization. In this work, we investigate the image-space parallelization of ray-casting for distributed-memory architectures. The most important issues in image-space parallelization are load balancing and minimization of the data redistribution overhead introduced at successive visualization instances. Load balancing in volume rendering requires correctly estimating the screen workload. For this purpose, we tested three different load assignment schemes. Since the data used in this work is made up of unstructured tetrahedral grids, clusters of data were used instead of cells for efficiency purposes. Two different cluster-processor distribution schemes are employed to see the effects of the initial data distribution. The major contribution of the thesis comes with the hypergraph partitioning model proposed as a solution to the remapping problem. For this purpose, the existing hypergraph partitioning tool PaToH is modified and used as a one-phase remapping tool. The model is tested on a Parsytec CC system and satisfactory results are obtained. Compared to the two-phase jagged partitioning model, our work incurs less preprocessing overhead. At comparable load imbalance values, our hypergraph partitioning model requires 25% less total volume of communication than jagged partitioning on the average.

Item Open Access: Hypergraph-partitioning-based decomposition for parallel sparse-matrix vector multiplication (IEEE, 1999) Catalyurek, U.V.; Aykanat, Cevdet
In this work, we show that the standard graph-partitioning-based decomposition of sparse matrices does not reflect the actual communication volume requirement for parallel matrix-vector multiplication. We propose two computational hypergraph models which avoid this crucial deficiency of the graph model. The proposed models reduce the decomposition problem to the well-known hypergraph partitioning problem. The recently proposed successful multilevel framework is exploited to develop a multilevel hypergraph partitioning tool, PaToH, for the experimental verification of our proposed hypergraph models. Experimental results on a wide range of realistic sparse test matrices confirm the validity of the proposed hypergraph models. In the decomposition of the test matrices, the hypergraph models using PaToH and hMeTiS result in up to 63 percent less communication volume (30 to 38 percent less on the average) than the graph model using MeTiS, while PaToH is only 1.3-2.3 times slower than MeTiS on the average.
Item Open Access: Hypergraph-partitioning-based remapping models for image-space-parallel direct volume rendering of unstructured grids (Institute of Electrical and Electronics Engineers, 2007-07) Cambazoglu, B. B.; Aykanat, Cevdet
In this work, image-space-parallel direct volume rendering (DVR) of unstructured grids is investigated for distributed-memory architectures. A hypergraph-partitioning-based model is proposed for the adaptive screen partitioning problem in this context. The proposed model aims to balance the rendering loads of processors while trying to minimize the amount of data replication. In the parallel DVR framework we adopted, each data primitive is statically owned by its home processor, which is responsible for replicating its primitives on other processors. Two appropriate remapping models are proposed by enhancing the above model for use within this framework. These two remapping models aim to minimize the total volume of communication in data replication while balancing the rendering loads of processors. Based on the proposed models, a parallel DVR algorithm is developed. The experiments conducted on a PC cluster show that the proposed remapping models achieve better speedup values compared to the remapping models previously suggested for image-space-parallel DVR. © 2007 IEEE.