Programming massively parallel processors : a hands-on approach / David B. Kirk and Wen-mei W. Hwu.
Contributor(s): Hwu, Wen-mei (author)
Material type: Text
Publisher: Amsterdam ; Boston : Elsevier/Morgan Kaufmann, ©2013
Edition: Second edition
Description: 1 online resource (xx, 496 pages) : illustrations (some color)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9780123914187; 0123914183
Subject(s): Multiprocessors | Parallel processing (Electronic computers) | Computer architecture | Parallel programming (Computer science) | COMPUTERS -- Systems Architecture -- Distributed Systems & Computing
Genre/Form: Electronic books
Additional physical formats: Print version: Programming massively parallel processors
DDC classification: 004.35
LOC classification: QA76.642 .K57 2013eb
Programming Massively Parallel Processors: A Hands-on Approach shows students and professionals alike the basic concepts of parallel programming and GPU architecture. Various techniques for constructing parallel programs are explored in detail. Case studies demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs. Performance, floating-point format, parallel patterns, and dynamic parallelism are covered in depth. This best-selling guide to CUDA and GPU parallel programming has been revised with more parallel programming examples, commonly used libraries such as Thrust, and explanations of the latest tools. With these improvements, the book retains its concise, intuitive, practical approach based on years of road-testing in the authors' own parallel computing courses.
Includes bibliographical references and index.
Contents:
History of GPU computing --
Introduction to data parallelism and CUDA C --
Data-parallel execution model --
CUDA memories --
Performance considerations --
Floating-point considerations --
Parallel patterns : convolution, with an introduction to constant memory and caches --
Parallel patterns : prefix sum, an introduction to work efficiency in parallel algorithms --
Parallel patterns : sparse matrix-vector multiplication, an introduction to compaction and regularization in parallel algorithms --
Application case study : advanced MRI reconstruction --
Application case study : molecular visualization and analysis --
Parallel programming and computational thinking --
An introduction to OpenCL™ --
Parallel programming with OpenACC --
Thrust : a productivity-oriented library for CUDA --
CUDA Fortran --
An introduction to C++ AMP --
Programming a heterogeneous computing cluster --
CUDA dynamic parallelism.
Print version record.