Synopses & Reviews
If you're looking to take full advantage of multi-core processors with concurrent programming, this practical book provides the knowledge and hands-on experience you need. The Art of Concurrency is one of the few resources to focus on implementing algorithms in the shared-memory model of multi-core processors, rather than just theoretical models or distributed-memory architectures. The book provides detailed explanations and usable samples to help you transform algorithms from serial to parallel code, along with advice and analysis for avoiding mistakes that programmers typically make when first attempting these computations.
Written by an Intel engineer with over two decades of parallel and concurrent programming experience, this book will help you:
- Understand parallelism and concurrency
- Explore the differences between programming for shared-memory and distributed-memory architectures
- Learn guidelines for designing multithreaded applications, including testing and tuning
- Discover how to make best use of different threading libraries, including Windows threads, POSIX threads, OpenMP, and Intel Threading Building Blocks
- Explore how to implement concurrent algorithms that involve sorting, searching, graphs, and other practical computations
The Art of Concurrency shows you how to keep your algorithms scalable so they can take advantage of new processors with ever more cores. For anyone developing parallel algorithms and concurrent programs, this book is a must.
Synopsis
This practical book gives programmers exactly what they need to develop applications that support concurrency--the execution of several tasks simultaneously.
About the Author
Clay Breshears has been with Intel since September 2000. He started as a Senior Parallel Application Engineer at the Intel Parallel Applications Center in Champaign, IL, implementing multithreaded and distributed solutions in customer applications. Clay is currently a Course Architect for the Intel Software College, specializing in multi-core and multithreaded programming and training. Before joining Intel, Clay was a Research Scientist at Rice University, helping Department of Defense researchers make best use of the latest High Performance Computing (HPC) platforms and resources.
Clay received his Ph.D. in Computer Science from the University of Tennessee, Knoxville, in 1996, but has been involved with parallel computation and programming for over twenty years; six of those years were spent in academia at Eastern Washington University and The University of Southern Mississippi.
Table of Contents
Preface
- Why Should You Read This Book?
- Who Is This Book For?
- What's in This Book?
- Conventions Used in This Book
- Using Code Examples
- Comments and Questions
- Safari® Books Online
- Acknowledgments

Chapter 1: Want to Go Faster? Raise Your Hands if You Want to Go Faster!
- 1.1 Some Questions You May Have
- 1.2 Four Steps of a Threading Methodology
- 1.3 Background of Parallel Algorithms
- 1.4 Shared-Memory Programming Versus Distributed-Memory Programming
- 1.5 This Book's Approach to Concurrent Programming

Chapter 2: Concurrent or Not Concurrent?
- 2.1 Design Models for Concurrent Algorithms
- 2.2 What's Not Parallel

Chapter 3: Proving Correctness and Measuring Performance
- 3.1 Verification of Parallel Algorithms
- 3.2 Example: The Critical Section Problem
- 3.3 Performance Metrics (How Am I Doing?)
- 3.4 Review of the Evolution for Supporting Parallelism in Hardware

Chapter 4: Eight Simple Rules for Designing Multithreaded Applications
- 4.1 Rule 1: Identify Truly Independent Computations
- 4.2 Rule 2: Implement Concurrency at the Highest Level Possible
- 4.3 Rule 3: Plan Early for Scalability to Take Advantage of Increasing Numbers of Cores
- 4.4 Rule 4: Make Use of Thread-Safe Libraries Wherever Possible
- 4.5 Rule 5: Use the Right Threading Model
- 4.6 Rule 6: Never Assume a Particular Order of Execution
- 4.7 Rule 7: Use Thread-Local Storage Whenever Possible or Associate Locks to Specific Data
- 4.8 Rule 8: Dare to Change the Algorithm for a Better Chance of Concurrency
- 4.9 Summary

Chapter 5: Threading Libraries
- 5.1 Implicit Threading
- 5.2 Explicit Threading
- 5.3 What Else Is Out There?
- 5.4 Domain-Specific Libraries

Chapter 6: Parallel Sum and Prefix Scan
- 6.1 Parallel Sum
- 6.2 Prefix Scan
- 6.3 Selection
- 6.4 A Final Thought

Chapter 7: MapReduce
- 7.1 Map As a Concurrent Operation
- 7.2 Reduce As a Concurrent Operation
- 7.3 Applying MapReduce
- 7.4 MapReduce As Generic Concurrency

Chapter 8: Sorting
- 8.1 Bubblesort
- 8.2 Odd-Even Transposition Sort
- 8.3 Shellsort
- 8.4 Quicksort
- 8.5 Radix Sort

Chapter 9: Searching
- 9.1 Unsorted Sequence
- 9.2 Binary Search

Chapter 10: Graph Algorithms
- 10.1 Depth-First Search
- 10.2 All-Pairs Shortest Path
- 10.3 Minimum Spanning Tree

Chapter 11: Threading Tools
- 11.1 Debuggers
- 11.2 Performance Tools
- 11.3 Anything Else Out There?
- 11.4 Go Forth and Conquer

Glossary
Photo Credits
Colophon