The exposition of the material is always very clear. The book is written confidently, with the greatest expertise and a remarkable breadth of topics. It should be an excellent resource for a broad audience of applied mathematicians. I would recommend it to all students, engineers and researchers in applied mathematics who wish to learn something about modern parallel techniques for large-scale matrix computation. Kudos to the authors on having produced such a delightful read and a much-needed reference! (Bruno Carpentieri, Mathematical Reviews, 2017)

The goal of this book is to provide basic principles for the design of such efficient parallel algorithms for dense and sparse matrices. The book is intended to be adequate for researchers as well as for advanced graduates. (Gudula Rünger, zbMATH 1341.65011, 2016)

This book covers parallel algorithms for a wide range of matrix computation problems, ranging from solving systems of linear equations to computing pseudospectra of matrices. This is a valuable reference book for researchers and practitioners in parallel computing. It includes up-to-date and comprehensive lists of references for various topics. This book is well written and accurate. I highly recommend it to the parallel computing community. (Sanzheng Qiao, Computing Reviews, November 2015)
Efstratios Gallopoulos, University of Patras, Patras, Greece
Bernard Philippe, INRIA/IRISA, Rennes Cedex, France
Ahmed H. Sameh, Purdue University, West Lafayette, IN, USA
List of Figures. List of Tables. List of Algorithms. Notations used in the book.

Part I: Basics
  Parallel Programming Paradigms: Computational models; Principles of parallel programming.
  Fundamental Kernels: Vector operations; Higher-level BLAS; General organization for dense matrix factorizations; Sparse matrix computations.

Part II: Dense and Special Matrix Computations
  Recurrences and Triangular Systems: Definitions and examples; Linear recurrences; Implementations for a given number of processors; Nonlinear recurrences.
  General Linear Systems: Gaussian elimination; Pairwise pivoting; Block LU factorization; Remarks.
  Banded Linear Systems: LU-based schemes with partial pivoting; The Spike family of algorithms; The Spike-balance scheme; A tearing-based banded solver; Tridiagonal systems.
  Special Linear Systems: Vandermonde solvers; Banded Toeplitz linear system solvers; Symmetric and antisymmetric decomposition (SAS); Rapid elliptic solvers.
  Orthogonal Factorization and Linear Least Squares Problems: Definitions; QR factorization via Givens rotations; QR factorization via Householder reductions; Gram-Schmidt orthogonalization; Normal equations vs. orthogonal reductions; Hybrid algorithms when m > n; Orthogonal factorization of block angular matrices; Rank-deficient linear least squares problems.
  The Symmetric Eigenvalue and Singular Value Problems: The Jacobi algorithms; Tridiagonalization-based schemes; Bidiagonalization via Householder reduction.

Part III: Sparse Matrix Computations
  Iterative Schemes for Large Linear Systems: An example; Classical splitting methods; Polynomial methods; Preconditioners; A tearing-based solver for generalized banded preconditioners; Row projection methods for large nonsymmetric linear systems; Multiplicative Schwarz preconditioner with GMRES.
  Large Symmetric Eigenvalue Problems: Computing dominant eigenpairs and spectral transformations; The Lanczos method; A block Lanczos approach for solving symmetric perturbed standard eigenvalue problems; The Davidson methods; The trace minimization method for the symmetric generalized eigenvalue problem; The sparse singular value problem.

Part IV: Matrix Functions and Characteristics
  Matrix Functions and the Determinant: Matrix functions; Determinants.
  Computing the Matrix Pseudospectrum: Grid-based methods; Dimensionality reduction on the domain: methods based on path following; Dimensionality reduction on the matrix: methods based on projection.

Notes. References.