Browsing by Subject "Inner product"
Now showing 1 - 3 of 3
Item Open Access: Çarpmasız yapay sinir ağı [Multiplication-free artificial neural network] (IEEE, 2015-05). Akbaş, Cem Emre; Bozkurt, Alican; Çetin, A. Enis; Çetin-Atalay, R.; Üner, A.
This paper presents an Artificial Neural Network (ANN) built without multiplication operations. The inner products of the input vectors and the ANN coefficients are computed with a multiplication-free vector operation, and the network is trained with the sign-LMS algorithm. The proposed ANN system is suitable for microprocessors with limited computational power or with low energy budgets.

Item Open Access: A multiplication-free framework for signal processing and applications in biomedical image analysis (IEEE, 2013). Suhre, A.; Keskin, F.; Ersahin, T.; Cetin-Atalay, R.; Ansari, R.; Cetin, A. E.
A new framework for signal processing is introduced, based on a novel vector product definition that permits a multiplier-free implementation. First, a new product of two real numbers is defined as the sum of their absolute values, with the sign determined by the product of the hard-limited numbers. This product of real numbers is then used to define an analogous product of vectors in R^N. The new vector product of two identical vectors reduces to a scaled version of the l1 norm of the vector. The main advantage of this framework is that it yields multiplication-free, computationally efficient algorithms for important signal processing tasks. An application to cancer cell line image classification is presented using a co-difference matrix, which is analogous to a covariance matrix except that the vector products are based on the proposed framework. Results show the effectiveness of this approach when the co-difference matrix is compared with the covariance matrix. © 2013 IEEE.

Item Open Access: A novel method for scaling iterative solvers: avoiding latency overhead of parallel sparse-matrix vector multiplies (Institute of Electrical and Electronics Engineers, 2015). Selvitopi, R. O.; Ozdal, M. M.; Aykanat, Cevdet
In parallel linear iterative solvers, sparse matrix-vector multiplication (SpMxV) incurs irregular point-to-point (P2P) communication, whereas inner product computations incur regular collective communication. The P2P communication introduces an additional synchronization point with relatively high message latency costs due to small message sizes. In these solvers, each SpMxV is usually followed by an inner product computation that involves the output vector of the SpMxV. We exploit this property to propose a novel parallelization method that avoids the latency costs and synchronization overhead of P2P communication. Our method involves a computational and a communication rearrangement scheme. The computational rearrangement provides an alternative way of forming the input vector of SpMxV and allows P2P and collective communications to be performed in a single phase. The communication rearrangement realizes this opportunity by embedding the P2P communications into global collective communication operations. The proposed method guarantees an upper bound on the maximum number of messages communicated, regardless of the sparsity pattern of the matrix. The downside is an increase in message volume and a negligible amount of redundant computation. We favor reducing message latency costs at the expense of increased message volume, and we propose two iterative-improvement-based heuristics that alleviate the volume increase through one-to-one task-to-processor mapping. Our experiments on two supercomputers, a Cray XE6 and an IBM BlueGene/Q, on up to 2,048 processors show that the proposed parallelization method scales better than the conventional parallelization method.
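The multiplication-free product from the second abstract (sum of absolute values, sign from the product of the hard-limited operands) and the resulting co-difference matrix can be sketched as below. This is an illustrative reading of the abstract, not the authors' code; the names `mf_product`, `mf_inner`, and `codifference` are made up here, and the demo uses `np.sign` and `*` for clarity where a hardware implementation would use only sign bits and additions.

```python
import numpy as np

def mf_product(a, b):
    # a (x) b = sgn(a) * sgn(b) * (|a| + |b|).
    # Zero if either operand is zero, since np.sign(0) == 0.
    return np.sign(a) * np.sign(b) * (np.abs(a) + np.abs(b))

def mf_inner(x, y):
    # Multiplication-free "inner product": sum of elementwise mf_product.
    # For y == x this is 2 * ||x||_1, the scaled l1 norm the abstract mentions.
    return np.sum(mf_product(x, y))

def codifference(X):
    # Co-difference matrix of data X (rows = observations, cols = features):
    # like a sample covariance matrix, but each vector product is mf_inner.
    Xc = X - X.mean(axis=0)
    n, d = Xc.shape
    C = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            C[i, j] = mf_inner(Xc[:, i], Xc[:, j]) / (n - 1)
    return C

x = np.array([1.0, -2.0, 3.0])
print(mf_inner(x, x))  # 12.0, i.e. 2 * ||x||_1
```

Because `mf_product` is symmetric in its arguments, the co-difference matrix is symmetric, just like a covariance matrix.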
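The structural property the third abstract exploits, that each SpMxV is immediately followed by an inner product involving its output vector, is visible in a plain serial conjugate-gradient iteration. The sketch below only illustrates that pairing; it does not reproduce the paper's parallel rearrangement scheme.

```python
import numpy as np

def cg(A, b, iters=50, tol=1e-12):
    """Serial conjugate gradients for SPD A. In a parallel solver,
    q = A @ p is the SpMxV (irregular P2P communication) and p @ q is
    the inner product on its output (regular collective communication);
    the paper's rearrangement merges these two communication steps."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for _ in range(iters):
        if rr < tol:
            break
        q = A @ p               # SpMxV
        alpha = rr / (p @ q)    # inner product involving the SpMxV output
        x += alpha * p
        r -= alpha * q
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x
```

Every iteration thus alternates one matrix-vector multiply with inner products on its result, which is why merging the two communication patterns into a single phase removes a synchronization point per iteration.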