Concurrency with Modern C++

What every professional C++ programmer should know about concurrency.

C++11 is the first C++ standard that deals with concurrency. The story continues with C++17 and C++20 and will go on with C++23.

I'll give you detailed insight into current and upcoming concurrency in C++, covering both the theory and a lot of practice.

This book is available in multiple packages!

About the Book

  • C++11 and C++14 have the basic building blocks for creating concurrent or parallel programs.
  • With C++17, we got the parallel algorithms of the Standard Template Library (STL). That means most STL algorithms can be executed sequentially, in parallel, or vectorized (a short illustrative sketch follows this list).
  • The concurrency story in C++ goes on. With C++20, we got coroutines, atomic smart pointers, semaphores, latches, and barriers.
  • C++23 supports the first concrete coroutine: std::generator.
  • With future C++ standards, we can hope for executors, extended futures, transactional memory, and more.
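
As a taste of these building blocks, here is a minimal illustrative sketch (it is not a listing from the book): it sorts a vector with a C++17 parallel execution policy and then uses a C++20 std::latch together with std::jthread to wait for a group of worker threads. It assumes a compiler and standard library with C++20 support (for example, a recent GCC with -std=c++20, which additionally needs TBB for the parallel execution policies), and the worker output may interleave.

// Illustrative sketch only (not from the book): a C++17 parallel algorithm plus a C++20 latch.
#include <algorithm>
#include <execution>
#include <iostream>
#include <latch>
#include <thread>
#include <vector>

int main() {
    std::vector<int> numbers{5, 3, 9, 1, 7, 2, 8, 4, 6};

    // C++17: ask the implementation to sort in parallel.
    std::sort(std::execution::par, numbers.begin(), numbers.end());

    // C++20: a latch lets the main thread wait until all workers have finished.
    constexpr int workerCount = 3;
    std::latch allDone(workerCount);

    std::vector<std::jthread> workers;                  // std::jthread joins automatically
    for (int i = 0; i < workerCount; ++i) {
        workers.emplace_back([&allDone, i] {
            std::cout << "worker " << i << " done\n";   // output may interleave
            allDone.count_down();                       // signal completion
        });
    }
    allDone.wait();                                     // block until the count reaches zero

    for (int n : numbers) std::cout << n << ' ';
    std::cout << '\n';
}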

This book explains the details of concurrency in modern C++ and gives you nearly 200 running code examples. Therefore, you can combine theory with practice and get the most out of it.

Because this book is about concurrency, I present many pitfalls and show you how to overcome them.
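
To give one small, illustrative example of such a pitfall (again, not a listing from the book): two threads that increment a shared counter without synchronization produce a data race and therefore undefined behavior; one simple cure is to make the counter a std::atomic<int>.

// Illustrative sketch only: a classic data-race pitfall and one way to fix it.
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    // Pitfall: a plain `int counter` incremented from two threads would be a data race.
    std::atomic<int> counter{0};                        // fix: an atomic counter

    auto work = [&counter] {
        for (int i = 0; i < 100'000; ++i) ++counter;    // atomic increment
    };

    std::thread t1(work);
    std::thread t2(work);
    t1.join();
    t2.join();

    std::cout << counter << '\n';                       // reliably 200000
}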

The book is 100% finished, but I will update it regularly. The next update will probably be about C++26. Furthermore, I will write about lock-free concurrent data structures and patterns for parallelization.

Packages

Pick Your Package

All packages include the ebook in the following formats: PDF, EPUB, and Web

The Book

Minimum price: $33.00
Suggested price: $41.00

  • Source Code

Concurrency with Modern C++ Team Edition: Five Copies

Get five copies for the price of three. This package includes all code examples.

Minimum price: $99.00
Suggested price: $123.00

  • Source Code

Author

About the Author

Rainer Grimm

I've worked as a software architect, team lead, and instructor since 1999. In 2002, I started an in-house further-education program at my company and have given training courses ever since. My first tutorials were about proprietary management software, but soon after, I began teaching Python and C++. In my spare time, I like to write articles about C++, Python, and Haskell. I also like to speak at conferences. I publish weekly on my English blog https://www.modernescpp.com.

Since 2016, I have been an independent instructor giving seminars about modern C++ and Python. I have published several books in various languages about modern C++ and, in particular, concurrency. Due to my profession, I am always searching for the best way to teach modern C++.

My books "C++ 11 für Programmierer ", "C++" and "C++ Standardbibliothek kurz & gut" for the "kurz & gut" series were published by Pearson and O'Reilly. They are available in German, English, Korean, and Persian. In summer 2018 I published a new book on Leanpub: "Concurrency with Modern C++". This book is also available in German: "Modernes C++: Concurrency meistern".

Contents

Table of Contents

Reader Testimonials

Introduction

  1. Conventions
  2. Special Fonts
  3. Special Symbols
  4. Special Boxes
  5. Tip Headline
  6. Warning Headline
  7. Distilled Information
  8. Source Code
  9. Run the Programs
  10. How should you read the book?
  11. Personal Notes
  12. Acknowledgment
  13. About Me
  14. I A Quick Overview

1. Concurrency with Modern C++

  1. 1.1 C++11 and C++14: The Foundation
  2. 1.1.1 Memory Model
  3. 1.1.2 Multithreading
  4. 1.2 C++17: Parallel Algorithms of the Standard Template Library
  5. 1.2.1 Execution Policy
  6. 1.2.2 New Algorithms
  7. 1.3 Coroutines
  8. 1.4 Case Studies
  9. 1.4.1 Calculating the Sum of a Vector
  10. 1.4.2 The Dining Philosophers Problem by Andre Adrian
  11. 1.4.3 Thread-Safe Initialization of a Singleton
  12. 1.4.4 Ongoing Optimization with CppMem
  13. 1.4.5 Fast Synchronization of Threads
  14. 1.5 Variations of Futures
  15. 1.6 Modification and Generalization of a Generator
  16. 1.7 Various Job Workflows
  17. 1.8 The Future of C++
  18. 1.8.1 Executors
  19. 1.8.2 Extended futures
  20. 1.8.3 Transactional Memory
  21. 1.8.4 Task Blocks
  22. 1.8.5 Data-Parallel Vector Library
  23. 1.9 Patterns and Best Practices
  24. 1.9.1 Synchronization
  25. 1.9.2 Concurrent Architecture
  26. 1.9.3 Best Practices
  27. 1.10 Data Structures
  28. 1.11 Challenges
  29. 1.12 Time Library
  30. 1.13 CppMem
  31. 1.14 Glossary
  32. II The Details

2. Memory Model

  1. 2.1 Basics of the Memory Model
  2. 2.1.1 What is a memory location?
  3. 2.1.2 What happens if two threads access the same memory location?
  4. 2.2 The Contract
  5. 2.2.1 The Foundation
  6. 2.2.2 The Challenges
  7. 2.3 Atomics
  8. 2.3.1 Strong versus Weak Memory Model
  9. 2.3.2 The Atomic Flag
  10. Initialization of a std::atomic_flag in C++11
  11. 2.3.3 std::atomic
  12. atomic is not volatile
  13. Push versus Pull Principle
  14. Check the type properties at compile time
  15. The Importance of being Thread-Safe
  16. The fetch_mult algorithm is lock_free
  17. 2.3.4 All Atomic Operations
  18. 2.3.5 Free Atomic Functions
  19. Atomic Smart Pointers with C++20
  20. 2.3.6 std::atomic_ref (C++20)
  21. 2.4 The Synchronization and Ordering Constraints
  22. 2.4.1 The Six Variants of Memory Orderings in C++
  23. 2.4.2 Sequential Consistency
  24. 2.4.3 Acquire-Release Semantic
  25. The memory model for a deeper understanding of multithreading
  26. Release Sequence
  27. 2.4.4 std::memory_order_consume
  28. 2.4.5 Relaxed Semantics
  29. The add algorithm is wait-free
  30. 2.5 Fences
  31. 2.5.1 std::atomic_thread_fence
  32. Synchronization between the release fence and the acquire fence
  33. 2.5.2 std::atomic_signal_fence
  34. Distilled Information

3. Multithreading

  1. 3.1 The Basic Thread std::thread
  2. 3.1.1 Thread Creation
  3. 3.1.2 Thread Lifetime
  4. The Challenge of detach
  5. scoped_thread by Anthony Williams
  6. 3.1.3 Thread Arguments
  7. Thread arguments by reference
  8. 3.1.4 Member Functions
  9. Access to the system-specific implementation
  10. 3.2 The Improved Thread std::jthread (C++20)
  11. 3.2.1 Automatically Joining
  12. 3.2.2 Cooperative Interruption of a std::jthread
  13. 3.3 Shared Data
  14. 3.3.1 Mutexes
  15. std::cout is thread-safe
  16. 3.3.2 Locks
  17. 3.3.3 std::lock
  18. Resolving the deadlock with a std::scoped_lock
  19. 3.3.4 Thread-safe Initialization
  20. Thread-safe Initialization in the main thread
  21. default and delete
  22. Know your Compiler support for static
  23. 3.4 Thread-Local Data
  24. From a Single-Threaded to a Multithreaded Program.
  25. 3.5 Condition Variables
  26. std::condition_variable_any
  27. 3.5.1 The Predicate
  28. 3.5.2 Lost Wakeup and Spurious Wakeup
  29. 3.5.3 The Wait Workflow
  30. Use a mutex to protect the shared variable
  31. 3.6 Cooperative Interruption (C++20)
  32. Killing a Thread is Dangerous
  33. 3.6.1 std::stop_source
  34. 3.6.2 std::stop_token
  35. 3.6.3 std::stop_callback
  36. 3.6.4 A General Mechanism to Send Signals
  37. 3.6.5 Additional Functionality of std::jthread
  38. 3.6.6 New wait Overloads for the condition_variable_any
  39. 3.7 Semaphores (C++20)
  40. Edsger W. Dijkstra invented semaphores
  41. 3.8 Latches and Barriers (C++20)
  42. 3.8.1 std::latch
  43. 3.8.2 std::barrier
  44. 3.9 Tasks
  45. Regard tasks as data channels between communication endpoints
  46. 3.9.1 Tasks versus Threads
  47. 3.9.2 std::async
  48. std::async should be your first choice
  49. Eager versus lazy evaluation
  50. 3.9.3 std::packaged_task
  51. 3.9.4 std::promise and std::future
  52. 3.9.5 std::shared_future
  53. 3.9.6 Exceptions
  54. std::current_exception and std::make_exception_ptr
  55. 3.9.7 Notifications
  56. 3.10 Synchronized Outputstreams (C++20)
  57. Distilled Information

4. Parallel Algorithms of the Standard Template Library

  1. 4.1 Execution Policies
  2. 4.1.1 Parallel and Vectorized Execution
  3. 4.1.2 Exceptions
  4. 4.1.3 Hazards of Data Races and Deadlocks
  5. 4.2 Algorithms
  6. 4.3 The New Algorithms
  7. transform_reduce becomes map_reduce
  8. 4.3.1 More overloads
  9. 4.3.2 The functional Heritage
  10. 4.4 Compiler Support
  11. 4.4.1 Microsoft Visual Compiler
  12. 4.4.2 GCC Compiler
  13. 4.4.3 Further Implementations of the Parallel STL
  14. 4.5 Performance
  15. Compiler Comparison
  16. 4.5.1 Microsoft Visual Compiler
  17. 4.5.2 GCC Compiler
  18. Distilled Information

5. Coroutines (C++20)

  1. The Challenge of Understanding Coroutines
  2. 5.1 A Generator Function
  3. 5.2 Characteristics
  4. 5.2.1 Typical Use Cases
  5. 5.2.2 Underlying Concepts
  6. 5.2.3 Design Goals
  7. 5.2.4 Becoming a Coroutine
  8. Distinguish Between the Coroutine Factory and the Coroutine Object
  9. 5.3 The Framework
  10. 5.3.1 Promise Object
  11. 5.3.2 Coroutine Handle
  12. The resumable object requires an inner type promise_type
  13. 5.3.3 Coroutine Frame
  14. 5.4 Awaitables and Awaiters
  15. 5.4.1 Awaitables
  16. 5.4.2 The Concept Awaiter
  17. 5.4.3 std::suspend_always and std::suspend_never
  18. 5.4.4 initial_suspend
  19. 5.4.5 final_suspend
  20. 5.4.6 Awaiter
  21. awaiter = awaitable
  22. 5.5 The Workflows
  23. 5.5.1 The Promise Workflow
  24. 5.5.2 The Awaiter Workflow
  25. 5.6 co_return
  26. 5.6.1 A Future
  27. 5.7 co_yield
  28. 5.7.1 An Infinite Data Stream
  29. 5.8 co_await
  30. 5.8.1 Starting a Job on Request
  31. 5.8.2 Thread Synchronization
  32. 5.9 std::generator (C++23)
  33. Distilled Information

6. Case Studies

  1. The Reference PCs
  2. 6.1 Calculating the Sum of a Vector
  3. 6.1.1 Single-Threaded addition of a Vector
  4. 6.1.2 Multi-threaded Summation with a Shared Variable
  5. Reduced Source Files
  6. 6.1.3 Thread-Local Summation
  7. 6.1.4 Summation of a Vector: The Conclusion
  8. 6.2 The Dining Philosophers Problem by Andre Adrian
  9. 6.2.1 Multiple Resource Use
  10. 6.2.2 Multiple Resource Use with Logging
  11. 6.2.3 Erroneous Busy Waiting without Resource Hierarchy
  12. 6.2.4 Erroneous Busy Waiting with Resource Hierarchy
  13. 6.2.5 Still Erroneous Busy Waiting with Resource Hierarchy
  14. 6.2.6 Correct Busy Waiting with Resource Hierarchy
  15. 6.2.7 Good low CPU load Busy Waiting with Resource Hierarchy
  16. 6.2.8 std::mutex with Resource Hierarchy
  17. 6.2.9 std::lock_guard with Resource Hierarchy
  18. 6.2.10 std::lock_guard and Synchronized Output with Resource Hierarchy
  19. 6.2.11 std::lock_guard and Synchronized Output with Resource Hierarchy and a count
  20. 6.2.12 A std::unique_lock using deferred locking
  21. 6.2.13 A std::scoped_lock with Resource Hierarchy
  22. 6.2.14 The Original Dining Philosophers Problem using Semaphores
  23. 6.2.15 A C++20 Compatible Semaphore
  24. 6.3 Thread-Safe Initialization of a Singleton
  25. Thoughts about Singletons
  26. 6.3.1 Double-Checked Locking Pattern
  27. 6.3.2 Performance Measurement
  28. The volatile Variable dummy
  29. 6.3.3 Thread-Safe Meyers Singleton
  30. I reduce the examples to the singleton implementation
  31. 6.3.4 std::lock_guard
  32. 6.3.5 std::call_once with std::once_flag
  33. 6.3.6 Atomics
  34. 6.3.7 Performance Numbers of the various Thread-Safe Singleton Implementations
  35. 6.4 Ongoing Optimization with CppMem
  36. 6.4.1 CppMem: Non-Atomic Variables
  37. Guarantees for int variables
  38. Why is the execution consistent?
  39. Using volatile
  40. 6.4.2 CppMem: Locks
  41. Using std::lock_guard in CppMem
  42. 6.4.3 CppMem: Atomics with Sequential Consistency
  43. 6.4.4 CppMem: Atomics with Acquire-Release Semantics
  44. 6.4.5 CppMem: Atomics with Non-atomics
  45. 6.4.6 CppMem: Atomics with Relaxed Semantic
  46. 6.4.7 Conclusion
  47. 6.5 Fast Synchronization of Threads
  48. About the Numbers
  49. 6.5.1 Condition Variables
  50. 6.5.2 std::atomic_flag
  51. 6.5.3 std::atomic<bool>
  52. 6.5.4 Semaphores
  53. 6.5.5 All Numbers
  54. 6.6 Variations of Futures
  55. 6.6.1 A Lazy Future
  56. Lifetime Challenges of Coroutines
  57. 6.6.2 Execution on Another Thread
  58. 6.7 Modification and Generalization of a Generator
  59. 6.7.1 Modifications
  60. 6.7.2 Generalization
  61. 6.8 Various Job Workflows
  62. 6.8.1 The Transparent Awaiter Workflow
  63. 6.8.2 Automatically Resuming the Awaiter
  64. 6.8.3 Automatically Resuming the Awaiter on a Separate Thread
  65. 6.9 Thread-Safe Queue
  66. Distilled Information

7. The Future of C++

  1. 7.1 Executors
  2. 7.1.1 A long Way
  3. 7.1.2 What is an Executor?
  4. Executors are the Building Blocks
  5. 7.1.3 First Examples
  6. 7.1.4 Goals of an Executor Concept
  7. 7.1.5 Terminology
  8. 7.1.6 Execution Functions
  9. 7.1.7 A Prototype Implementation
  10. 7.2 Extended Futures
  11. 7.2.1 Concurrency TS v1
  12. The proposal N3721
  13. 7.2.2 Unified Futures
  14. 7.3 Transactional Memory
  15. 7.3.1 ACI(D)
  16. 7.3.2 Synchronized and Atomic Blocks
  17. 7.3.3 transaction_safe versus transaction_unsafe Code
  18. 7.4 Task Blocks
  19. 7.4.1 Fork and Join
  20. HPX (High Performance ParalleX)
  21. 7.4.2 define_task_block versus define_task_block_restore_thread
  22. 7.4.3 The Interface
  23. 7.4.4 The Scheduler
  24. 7.5 Data-Parallel Vector Library
  25. Auto-Vectorization
  26. 7.5.1 Data-Parallel Vectors
  27. 7.5.2 The Interface of the Data-Parallel Vectors
  28. Distilled Information
  29. III Patterns

8. Patterns and Best Practices

  1. 8.1 History
  2. 8.2 Invaluable Value
  3. 8.3 Pattern versus Best Practices
  4. 8.4 Anti-Pattern
  5. Distilled Information

9. Synchronization Patterns

  1. 9.1 Dealing with Sharing
  2. 9.1.1 Copied Value
  3. Value Object
  4. 9.1.2 Thread-Specific Storage
  5. Use the Algorithms of the Standard Template Library.
  6. 9.1.3 Future
  7. 9.2 Dealing with Mutation
  8. 9.2.1 Scoped Locking
  9. 9.2.2 Strategized Locking
  10. Null Object
  11. 9.2.3 Thread-Safe Interface
  12. Inline static data members
  13. 9.2.4 Guarded Suspension
  14. Distilled Information

10. Concurrent Architecture

  1. 10.1 Active Object
  2. 10.1.1 Challenges
  3. 10.1.2 Solution
  4. 10.1.3 Components
  5. 10.1.4 Dynamic Behavior
  6. Proxy
  7. 10.1.5 Advantages and Disadvantages
  8. 10.1.6 Implementation
  9. 10.2 Monitor Object
  10. 10.2.1 Challenges
  11. 10.2.2 Solution
  12. 10.2.3 Components
  13. 10.2.4 Dynamic Behavior
  14. 10.2.5 Advantages and Disadvantages
  15. 10.2.6 Implementation
  16. Thread-Safe Queue - Two Serious Errors
  17. 10.3 Half-Sync/Half-Async
  18. 10.3.1 Challenges
  19. 10.3.2 Solution
  20. 10.3.3 Components
  21. 10.3.4 Dynamic Behavior
  22. 10.3.5 Advantages and Disadvantages
  23. 10.3.6 Example
  24. 10.4 Reactor
  25. 10.4.1 Challenges
  26. 10.4.2 Solution
  27. 10.4.3 Components
  28. Synchronous Event Demultiplexer
  29. 10.4.4 Dynamic Behavior
  30. 10.4.5 Advantages and Disadvantages
  31. 10.4.6 Example
  32. The Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP)
  33. Acceptor-Connector
  34. 10.5 Proactor
  35. 10.5.1 Challenges
  36. 10.5.2 Solution
  37. 10.5.3 Components
  38. 10.5.4 Advantages and Disadvantages
  39. Asio
  40. 10.5.5 Example
  41. 10.6 Further Information
  42. Distilled Information

11. Best Practices

  1. 11.1 General
  2. 11.1.1 Code Reviews
  3. 11.1.2 Minimize Sharing of Mutable Data
  4. 11.1.3 Minimize Waiting
  5. 11.1.4 Prefer Immutable Data
  6. 11.1.5 Use pure functions
  7. 11.1.6 Look for the Right Abstraction
  8. 11.1.7 Use Static Code Analysis Tools
  9. 11.1.8 Use Dynamic Enforcement Tools
  10. 11.2 Multithreading
  11. 11.2.1 Threads
  12. 11.2.2 Data Sharing
  13. 11.2.3 Condition Variables
  14. 11.2.4 Promises and Futures
  15. 11.3 Memory Model
  16. 11.3.1 Don’t use volatile for synchronization
  17. 11.3.2 Don’t program Lock Free
  18. 11.3.3 If you program Lock-Free, use well-established patterns
  19. 11.3.4 Don’t build your abstraction, use guarantees of the language
  20. 11.3.5 Don’t reinvent the wheel
  21. Distilled Information
  22. IV Data Structures

12. General Considerations

  1. 12.1 Concurrent Stack
  2. 12.2 Locking Strategy
  3. 12.3 Granularity of the Interface
  4. 12.4 Typical Usage Pattern
  5. 12.4.1 Linux (GCC)
  6. 12.4.2 Windows (cl.exe)
  7. 12.5 Avoidance of Loopholes
  8. 12.6 Contention
  9. 12.6.1 Single-Threaded Summation without Synchronization
  10. 12.6.2 Single-Threaded Summation with Synchronization (lock)
  11. 12.6.3 Single-Threaded Summation with Synchronization (atomic)
  12. 12.6.4 The Comparison
  13. 12.7 Scalability
  14. 12.8 Invariants
  15. 12.9 Exceptions
  16. Distilled Information

13. Lock-Based Data Structures

  1. 13.1 Concurrent Stack
  2. 13.1.1 A Stack
  3. 13.2 Concurrent Queue
  4. 13.2.1 A Queue
  5. 13.2.2 Coarse-Grained Locking
  6. 13.2.3 Fine-Grained Locking
  7. Distilled Information

14. Lock-Free Data Structures

  1. Designing a Lock-Free Data Structure is Very Challenging
  2. 14.1 General Considerations
  3. 14.1.1 The Next Evolutionary Step
  4. 14.1.2 Sequential Consistency
  5. 14.2 Concurrent Stack
  6. 14.2.1 A Simplified Implementation
  7. push is lock-free but not wait-free
  8. 14.2.2 A Complete Implementation
  9. 14.3 Concurrent Queue
  10. Distilled Information
  11. V Further Information

15. Challenges

  1. 15.1 ABA Problem
  2. Two new proposals
  3. 15.2 Blocking Issues
  4. 15.3 Breaking of Program Invariants
  5. 15.4 Data Races
  6. 15.5 Deadlocks
  7. Locking a non-recursive mutex more than once
  8. 15.6 False Sharing
  9. The optimizer detects the false sharing
  10. std::hardware_destructive_interference_size and std::hardware_constructive_interference_size with C++17
  11. 15.7 Lifetime Issues of Variables
  12. 15.8 Moving Threads
  13. 15.9 Race Conditions

16. The Time Library

  1. 16.1 The Interplay of Time Point, Time Duration, and Clock
  2. 16.2 Time Point
  3. 16.2.1 From Time Point to Calendar Time
  4. 16.2.2 Cross the valid Time Range
  5. 16.3 Time Duration
  6. 16.3.1 Calculations
  7. Evaluation at compile time
  8. 16.4 Clocks
  9. No guarantees about the accuracy, starting point, and valid time range
  10. 16.4.1 Accuracy and Steadiness
  11. 16.4.2 Epoch
  12. 16.5 Sleep and Wait

17. CppMem - An Overview

  1. 17.1 The simplified Overview
  2. 17.1.1 1. Model
  3. 17.1.2 2. Program
  4. 17.1.3 3. Display Relations
  5. 17.1.4 4. Display Layout
  6. 17.1.5 5. Model Predicates
  7. 17.1.6 The Examples

18. Glossary

  1. 18.1 address_free
  2. 18.2 ACID
  3. 18.3 CAS
  4. 18.4 Callable Unit
  5. 18.5 Complexity
  6. 18.6 Concepts
  7. 18.7 Concurrency
  8. 18.8 Critical Section
  9. 18.9 Deadlock
  10. 18.10 Eager Evaluation
  11. 18.11 Executor
  12. 18.12 Function Objects
  13. Instantiate function objects to use them
  14. 18.13 Lambda Functions
  15. Lambda functions should be your first choice
  16. 18.14 Lazy evaluation
  17. 18.15 Lock-free
  18. 18.16 Lock-based
  19. 18.17 Lost Wakeup
  20. 18.18 Math Laws
  21. 18.19 Memory Location
  22. 18.20 Memory Model
  23. 18.21 Modification Order
  24. 18.22 Monad
  25. 18.23 Non-blocking
  26. 18.24 obstruction-free
  27. 18.25 Parallelism
  28. 18.26 Predicate
  29. 18.27 Pattern
  30. 18.28 RAII
  31. 18.29 Release Sequence
  32. 18.30 Sequential Consistency
  33. 18.31 Sequence Point
  34. 18.32 Spurious Wakeup
  35. 18.33 Thread
  36. 18.34 Total order
  37. 18.35 TriviallyCopyable
  38. 18.36 Undefined Behavior
  39. 18.37 volatile
  40. 18.38 wait-free

Index

Get the free sample chapters

A free sample is available in PDF and EPUB and can also be read online.

The Leanpub 60 Day 100% Happiness Guarantee

Within 60 days of purchase you can get a 100% refund on any Leanpub purchase, in two clicks.

Now, this is technically risky for us, since you'll have the book or course files either way. But we're so confident in our products and services, and in our authors and readers, that we're happy to offer a full money back guarantee for everything we sell.

You can only find out how good something is by trying it, and because of our 100% money back guarantee there's literally no risk in doing so!

So, there's no reason not to click the Add to Cart button, is there?


Earn $8 on a $10 Purchase, and $16 on a $20 Purchase

We pay 80% royalties on purchases of $7.99 or more, and 80% royalties minus a 50 cent flat fee on purchases between $0.99 and $7.98. You earn $8 on a $10 sale, and $16 on a $20 sale. So, if we sell 5000 non-refunded copies of your book for $20, you'll earn $80,000.

(Yes, some authors have already earned much more than that on Leanpub.)

In fact, authors have earned over $14 million writing, publishing and selling on Leanpub.

Learn more about writing on Leanpub

Free Updates. DRM Free.

If you buy a Leanpub book, you get free updates for as long as the author updates the book! Many authors use Leanpub to publish their books in-progress, while they are writing them. All readers get free updates, regardless of when they bought the book or how much they paid (including free).

Most Leanpub books are available in PDF (for computers) and EPUB (for phones, tablets and Kindle). The formats that a book includes are shown at the top right corner of this page.

Finally, Leanpub books don't have any DRM copy-protection nonsense, so you can easily read them on any supported device.

Learn more about Leanpub's ebook formats and where to read them

Write and Publish on Leanpub

You can use Leanpub to easily write, publish and sell in-progress and completed ebooks and online courses!

Leanpub is a powerful platform for serious authors, combining a simple, elegant writing and publishing workflow with a store focused on selling in-progress ebooks.

Leanpub is a magical typewriter for authors: just write in plain text, and to publish your ebook, just click a button. (Or, if you are producing your ebook your own way, you can even upload your own PDF and/or EPUB files and then publish with one click!) It really is that easy.

Learn more about writing on Leanpub