In C++11, a thread pool is a highly useful concurrency design pattern for managing and scheduling multiple threads to execute tasks, improving execution efficiency and responsiveness. Prior to C++11, programmers typically relied on operating system APIs or third-party libraries to implement thread pools. The C++11 standard, however, introduced built-in concurrency support, including threads (std::thread), mutexes (std::mutex), and condition variables (std::condition_variable), which significantly simplifies implementing a thread pool in portable, standard C++.
Basic Concepts and Components of a Thread Pool
A thread pool primarily consists of the following components:
- Task Queue: A queue storing pending tasks, typically implemented as a first-in-first-out (FIFO) structure.
- Worker Threads: A set of threads initialized at construction that continuously fetch tasks from the task queue and execute them.
- Mutex and Condition Variables: Used for synchronizing and coordinating execution between the main thread and worker threads.
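The components above map directly onto class members. As a minimal skeleton (member names here are illustrative, chosen to match the full example later in this article):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal skeleton showing the three component groups of a thread pool.
struct ThreadPoolSkeleton {
    std::vector<std::thread> workers;         // worker threads
    std::queue<std::function<void()>> tasks;  // FIFO task queue
    std::mutex queue_mutex;                   // protects `tasks`
    std::condition_variable condition;        // wakes idle workers
    bool stop = false;                        // shutdown flag
};
```

Every access to the task queue must hold the mutex, and the condition variable lets workers sleep while the queue is empty instead of busy-waiting.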
Implementing a Simple Thread Pool
The following is a simple example of implementing a thread pool in C++11:
```cpp
#include <condition_variable>
#include <functional>
#include <future>
#include <iostream>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <thread>
#include <vector>

class ThreadPool {
private:
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> tasks;
    std::mutex queue_mutex;
    std::condition_variable condition;
    bool stop;

public:
    ThreadPool(size_t threads) : stop(false) {
        for (size_t i = 0; i < threads; ++i)
            workers.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(this->queue_mutex);
                        // Sleep until there is a task to run or the pool is stopping
                        this->condition.wait(lock, [this] {
                            return this->stop || !this->tasks.empty();
                        });
                        if (this->stop && this->tasks.empty())
                            return;
                        task = std::move(this->tasks.front());
                        this->tasks.pop();
                    }
                    task();  // execute outside the lock
                }
            });
    }

    ~ThreadPool() {
        {
            std::unique_lock<std::mutex> lock(queue_mutex);
            stop = true;
        }
        condition.notify_all();
        for (std::thread &worker : workers)
            worker.join();
    }

    template<class F, class... Args>
    auto enqueue(F&& f, Args&&... args)
        -> std::future<typename std::result_of<F(Args...)>::type> {
        using return_type = typename std::result_of<F(Args...)>::type;

        auto task = std::make_shared<std::packaged_task<return_type()>>(
            std::bind(std::forward<F>(f), std::forward<Args>(args)...));

        std::future<return_type> res = task->get_future();
        {
            std::unique_lock<std::mutex> lock(queue_mutex);
            // Prevent enqueueing after the pool has been stopped
            if (stop)
                throw std::runtime_error("enqueue on stopped ThreadPool");
            tasks.emplace([task]() { (*task)(); });
        }
        condition.notify_one();
        return res;
    }
};

void print_number(int x) {
    std::cout << "Number: " << x << std::endl;
}

int main() {
    ThreadPool pool(4);  // Create a thread pool with 4 worker threads

    // Enqueue tasks to the thread pool
    for (int i = 0; i < 10; ++i) {
        pool.enqueue(print_number, i);
    }

    return 0;  // The destructor drains the remaining tasks and joins the workers
}
```
Explanation
In the above code, we define a ThreadPool class that initializes a specified number of worker threads. These threads continuously fetch tasks from the task queue and execute them. When the enqueue() method is invoked, it wraps the callable in a std::packaged_task, pushes it onto the queue, notifies one waiting worker via the condition variable, and returns a std::future through which the caller can later retrieve the task's result.
This example demonstrates how to leverage C++11's concurrency and synchronization mechanisms to implement a basic thread pool. In practical applications, however, thread pool implementations typically need additional machinery beyond this minimal example, such as propagating exceptions thrown by tasks, supporting cancellation, or resizing the pool dynamically.