Synopses & Reviews
This accessible text covers the techniques of parallel programming in a practical manner that enables readers to write and evaluate their own parallel programs. Supported by the National Science Foundation and exhaustively class-tested, it is the first text of its kind that does not require access to a special multiprocessor system, concentrating instead on parallel programs that can be executed on networked computers using freely available parallel software tools. The book covers the timely topic of cluster programming, of interest to many programmers since the arrival of low-cost commodity computers. It describes algorithms in MPI-style pseudocode, allowing them to be implemented with different programming tools, and provides thorough coverage of shared-memory programming, including Pthreads and OpenMP. It is also useful as a professional reference for programmers and system administrators.
This book presents a variety of parallel programming approaches and analyzes their performance in detail.
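As a flavor of the message-passing model the text builds on (Chapter 2, Message-Passing Computing), here is a minimal sketch of the send/receive pattern. This is not code from the book: it uses Python threads and queues to stand in for MPI processes and for `MPI_Send`/`MPI_Recv`, purely for illustration.

```python
import threading
import queue

def worker(inbox, outbox):
    # Receive a message, compute on it, and send the result back
    # (analogous to an MPI_Recv followed by an MPI_Send).
    data = inbox.get()
    outbox.put(sum(data))

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

inbox.put([1, 2, 3, 4])   # "send" a message to the worker
result = outbox.get()     # "receive" the worker's reply
t.join()
print(result)             # 10
```

In a real MPI program the two endpoints would be separate processes on networked machines; the queue here merely mimics the communication channel.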
Table of Contents
I. BASIC TECHNIQUES.
1. Parallel Computers.
2. Message-Passing Computing.
3. Embarrassingly Parallel Computations.
4. Partitioning and Divide-and-Conquer Strategies.
5. Pipelined Computations.
6. Synchronous Computations.
7. Load Balancing and Termination Detection.
8. Programming with Shared Memory.
9. Distributed Shared Memory Systems and Programming.
II. ALGORITHMS AND APPLICATIONS.
10. Sorting Algorithms.
11. Numerical Algorithms.
12. Image Processing.
13. Searching and Optimization.
Appendix A: Basic MPI Routines.
Appendix B: Basic Pthread Routines.
Appendix C: OpenMP Directives, Library Functions, and Environment Variables.
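To give a concrete sense of the "embarrassingly parallel" pattern covered in Chapter 3 (fully independent subtasks followed by a single combining step), here is a minimal sketch. It is not taken from the book; it uses a Python thread pool, whereas the text targets message-passing programs on networked computers, and the chunking scheme is an illustrative choice.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    # Each task is independent: it sums squares over its own subrange
    # and needs no communication with the other tasks.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

n, workers = 100_000, 4
step = n // workers
# Split the range [0, n) into one chunk per worker; the last chunk
# absorbs any remainder.
chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
          for i in range(workers)]

with ThreadPoolExecutor(max_workers=workers) as pool:
    # The only coordination is combining the partial results at the end.
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(i * i for i in range(n)))  # True
```

Because the subtasks share no data, the same decomposition maps directly onto separate processes or networked machines, which is the setting the book's cluster-programming chapters address.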