Synopses & Reviews
If you have a working knowledge of Haskell, this hands-on book shows you how to use the language's many APIs and frameworks for writing both parallel and concurrent programs. You'll learn how parallelism exploits multicore processors to speed up computation-heavy programs, and how concurrency enables you to write programs with threads that handle multiple interactions.
Author Simon Marlow walks you through the process with lots of code examples that you can run, experiment with, and extend. Divided into separate sections on Parallel and Concurrent Haskell, this book also includes exercises to help you become familiar with the concepts presented:
- Express parallelism in Haskell with the Eval monad and Evaluation Strategies
- Parallelize ordinary Haskell code with the Par monad
- Build parallel array-based computations using the Repa library
- Use the Accelerate library to run computations directly on the GPU
- Work with basic interfaces for writing concurrent code
- Build trees of threads for larger and more complex programs
- Learn how to build high-speed concurrent network servers
- Write distributed programs that run on multiple machines in a network
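To give a flavor of the first bullet, here is a minimal sketch (not taken from the book) of the Eval monad in action, using `rpar` and `rseq` from `Control.Parallel.Strategies`; the naive `fib` is just a placeholder workload:

```haskell
import Control.Parallel.Strategies (runEval, rpar, rseq)

-- A deliberately slow Fibonacci, so there is real work to parallelize.
fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

-- Evaluate two calls to fib: one sparked for parallel evaluation
-- with rpar, one evaluated on the current thread with rseq.
parPair :: (Integer, Integer)
parPair = runEval $ do
  a <- rpar (fib 25)  -- spark: may run on another core
  b <- rseq (fib 24)  -- evaluated now, in this thread
  _ <- rseq a         -- wait for the spark to finish
  return (a, b)
```

Compiled with `-threaded` and run with `+RTS -N`, the sparked computation can run on a second core while the current thread evaluates its own call.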
This book covers the breadth of Haskell's diverse selection of programming APIs for concurrent and parallel programming. It is split into two parts. The first part, on parallel programming, covers the techniques for using multiple processors to speed up CPU-intensive computations, including methods for using parallelism in both idiomatic Haskell and numerical array-based algorithms, and for running computations on a GPU. The second part, on concurrent programming, covers techniques for using multiple threads, including overlapping multiple I/O operations, building concurrent network servers, and distributed programming across multiple machines.
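As a taste of the concurrency half, here is a small sketch (again, not from the book) of the basic building blocks it starts from: forking a thread with `forkIO` and communicating the result back through an `MVar`:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Run an action in a freshly forked thread and hand its result
-- back through an MVar; takeMVar blocks until the result arrives.
runInThread :: IO a -> IO a
runInThread act = do
  box <- newEmptyMVar
  _ <- forkIO (act >>= putMVar box)
  takeMVar box
```

The empty `MVar` here doubles as a one-shot channel and a completion signal, which is the pattern the book's later chapters build richer abstractions on.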
About the Author
Simon Marlow has been a prominent figure in the Haskell community for many years. He is the author of large parts of the Glasgow Haskell Compiler, including in particular its highly regarded multicore runtime system, along with many of the libraries and tools that Haskell programmers take for granted. Simon also contributes to the functional programming research community, and has a string of papers on subjects ranging from garbage collection to language design. In recent years Simon's focus has been on making Haskell an ideal programming language for parallel and concurrent applications, both by developing new programming models and building a high-quality implementation.
Simon spent 14 years at Microsoft's Research laboratory in Cambridge, before taking a break in Spring 2013 to work on this book. He currently works at Facebook UK.
Table of Contents
Preface; Audience; How to Read This Book; Conventions Used in This Book; Using Sample Code; Safari® Books Online; How to Contact Us; Acknowledgments

Chapter 1: Introduction
1.1 Terminology: Parallelism and Concurrency; 1.2 Tools and Resources; 1.3 Sample Code

Parallel Haskell

Chapter 2: Basic Parallelism: The Eval Monad
2.1 Lazy Evaluation and Weak Head Normal Form; 2.2 The Eval Monad, rpar, and rseq; 2.3 Example: Parallelizing a Sudoku Solver; 2.4 Deepseq

Chapter 3: Evaluation Strategies
3.1 Parameterized Strategies; 3.2 A Strategy for Evaluating a List in Parallel; 3.3 Example: The K-Means Problem; 3.4 GC'd Sparks and Speculative Parallelism; 3.5 Parallelizing Lazy Streams with parBuffer; 3.6 Chunking Strategies; 3.7 The Identity Property

Chapter 4: Dataflow Parallelism: The Par Monad
4.1 Example: Shortest Paths in a Graph; 4.2 Pipeline Parallelism; 4.3 Example: A Conference Timetable; 4.4 Example: A Parallel Type Inferencer; 4.5 Using Different Schedulers; 4.6 The Par Monad Compared to Strategies

Chapter 5: Data Parallel Programming with Repa
5.1 Arrays, Shapes, and Indices; 5.2 Operations on Arrays; 5.3 Example: Computing Shortest Paths; 5.4 Folding and Shape-Polymorphism; 5.5 Example: Image Rotation; 5.6 Summary

Chapter 6: GPU Programming with Accelerate
6.1 Overview; 6.2 Arrays and Indices; 6.3 Running a Simple Accelerate Computation; 6.4 Scalar Arrays; 6.5 Indexing Arrays; 6.6 Creating Arrays Inside Acc; 6.7 Zipping Two Arrays; 6.8 Constants; 6.9 Example: Shortest Paths; 6.10 Example: A Mandelbrot Set Generator

Concurrent Haskell

Chapter 7: Basic Concurrency: Threads and MVars
7.1 A Simple Example: Reminders; 7.2 Communication: MVars; 7.3 MVar as a Simple Channel: A Logging Service; 7.4 MVar as a Container for Shared State; 7.5 MVar as a Building Block: Unbounded Channels; 7.6 Fairness

Chapter 8: Overlapping Input/Output
8.1 Exceptions in Haskell; 8.2 Error Handling with Async; 8.3 Merging

Chapter 9: Cancellation and Timeouts
9.1 Asynchronous Exceptions; 9.2 Masking Asynchronous Exceptions; 9.3 The bracket Operation; 9.4 Asynchronous Exception Safety for Channels; 9.5 Timeouts; 9.6 Catching Asynchronous Exceptions; 9.7 mask and forkIO; 9.8 Asynchronous Exceptions: Discussion

Chapter 10: Software Transactional Memory
10.1 Running Example: Managing Windows; 10.2 Blocking; 10.3 Blocking Until Something Changes; 10.4 Merging with STM; 10.5 Async Revisited; 10.6 Implementing Channels with STM; 10.7 An Alternative Channel Implementation; 10.8 Bounded Channels; 10.9 What Can We Not Do with STM?; 10.10 Performance; 10.11 Summary

Chapter 11: Higher-Level Concurrency Abstractions
11.1 Avoiding Thread Leakage; 11.2 Symmetric Concurrency Combinators; 11.3 Adding a Functor Instance; 11.4 Summary: The Async API

Chapter 12: Concurrent Network Servers
12.1 A Trivial Server; 12.2 Extending the Simple Server with State; 12.3 A Chat Server

Chapter 13: Parallel Programming Using Threads
13.1 How to Achieve Parallelism with Concurrency; 13.2 Example: Searching for Files

Chapter 14: Distributed Programming
14.1 The Distributed-Process Family of Packages; 14.2 Distributed Concurrency or Parallelism?; 14.3 A First Example: Pings; 14.4 Multi-Node Ping; 14.5 Typed Channels; 14.6 Handling Failure; 14.7 A Distributed Chat Server; 14.8 Exercise: A Distributed Key-Value Store

Chapter 15: Debugging, Tuning, and Interfacing with Foreign Code
15.1 Debugging Concurrent Programs; 15.2 Tuning Concurrent (and Parallel) Programs; 15.3 Concurrency and the Foreign Function Interface

Colophon